This past week, with the overarching synopsis in mind, we tested a series of initial paper prototypes that all used different information from social media to convey the same message – from Instagram (prototype 1) to Facebook (prototypes 2 and 3) to Twitter (prototype 4). Unfortunately, due to time constraints, not every prototype was tested twice, but the following post still aims to give a holistic picture of each and the main insights.
The underlying concept with prototype 1 was to visualize how the images that accompany our visual media could differ between different ‘profiles’ built to be reminiscent of varied filter bubbles. Ultimately, it is anticipated that similar to how the news headlines and textual content would differ, the visual media would as well. Using Instagram as the social media in question, the scraped images (shown here as gray boxes) slowly start to differ from one another over time as the filter bubble effect intensifies.
For prototype 1, there were two separate iterations to explore how the division of sourced Instagram material could be presented. Version A, on the left, explored how the images could multiply over time whereas version B, on the right, explored how the images themselves, rather than quantity, could change over time. In both cases, the option to swipe the headline tab up allowed users to see the corresponding textual data if desired.
Prototype 1 was tested with two separate users.
- Issue #1: (3) For both users, there was initial hesitation in how to proceed after picking one profile. This was a fault of the prototype design, as the first chosen profile should have immediately gotten a ring around it rather than only after a second was chosen.
- Issue #2: (0) One user intuitively clicked on an image, expecting that the image itself would pop up and scale up in size, much like it does in the Instagram app. This is a helpful note to incorporate in future iterations, as it can be difficult to assess the content of the images if they are scaled down too much.
- Issue #3: (2) For both users, the option to swipe up the headline bar was not intuitive and left them confused. It is not a critical flaw, as viewing the textual data was seen as an additional feature, but it would nonetheless need to be clarified in future iterations.
- Issue #4: (1) Again likely an issue stemming from the nature of paper prototypes, but one user naturally tried to search for the topic in the bar that said ‘Google search’ instead of the hashtag search function above. The intent was for this bar to be a non-interactive, constantly updating feed of Google searches on the topic from which the headlines would be generated, but it falls short of that. This will need to be re-evaluated in future iterations, as even a digital prototype would likely be similarly confusing.
All in all, both users vastly preferred version A, which is the option where the images multiplied over time – it was far more intuitive than version B. However, one point that must be addressed regarding prototype 1 is that for both users, it was very difficult to immediately understand what was happening without prior guidance. The choice to use gray boxes as an abstraction (as opposed to actual example images) was a critical mistake that caused a lot of initial confusion in the testing process. Nevertheless, the concept to see our filter bubbles through visual means was intriguing to both.
If we were to move forward with prototype 1, the leading suggestion in the long run would be to clarify some aspects of the UI to ensure that there is not too much information delivered all at once. For example, the rolling ‘Google search’ feature could perhaps be taken out entirely. Similarly, the headline bar should be made more intuitive or also removed, with the textual information displayed in a different way if it feels necessary (e.g. simply underneath each corresponding image).
The second prototype is a newsfeed app that allows you to see what other people’s newsfeeds look like. The personalized newsfeed content on social media platforms today is filtered so that what we see is a reflection of our demographic and geographic information and of what we post and click. We see what the algorithm determines we should see; we see what we “want to see”.
This prototype allows you to step outside of your individual bubble and directly step into the bubbles of others around the world and experience their views. It aims to expose the filter bubble effect to people and provide a more comprehensive way to view news.
Several world-famous people are chosen here as example profiles. After typing a keyword into the search bar, users get a newsfeed about that topic, and by clicking the different profile icons below, they enter a chosen profile’s newsfeed.
(Technically, we could build the fake profile accounts by cloning each chosen profile’s Twitter account, copying their posts, and then letting the algorithm decide which news to present.)
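A minimal sketch of this cloning idea, with an invented keyword-overlap ranker standing in for the real feed algorithm (no actual Twitter API is used; all names and data here are illustrative):

```python
from collections import Counter

def clone_profile(source_posts):
    """Build a 'clone' profile simply by copying the source account's posts."""
    return {"posts": list(source_posts)}

def rank_news(profile, candidate_articles, top_n=2):
    """A toy stand-in for the feed algorithm: score each candidate article
    by how many words it shares with the cloned profile's posts."""
    interests = Counter(w.lower() for p in profile["posts"] for w in p.split())
    def score(article):
        return sum(interests[w.lower()] for w in article.split())
    return sorted(candidate_articles, key=score, reverse=True)[:top_n]

clone = clone_profile(["climate policy debate", "new climate report out"])
feed = rank_news(clone, ["climate summit opens", "local sports results",
                         "markets rally today"], top_n=1)
# 'climate summit opens' wins because 'climate' appears in the cloned posts
```

A real implementation would pull posts via a platform API and use the platform’s own recommendation behavior, but the shape of the idea is the same: copy the posts, then let a ranking step produce the persona’s feed.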
WHAT DIDN’T WORK
- Issue #1: It was not easy for testers to recognize the profiles; it was not immediately apparent from the icons who they were.
How to fix: Add names to the different profiles to indicate who they are.
- Issue #2: Testers did not initially realize that the first newsfeed page was their own.
How to fix: This could be fixed by adding the user’s own profile icon in a selected state.
- Issue #3: The prototype did not provide enough content for scrolling down or clicking through to read an article.
How to fix: We will make sure to include the full interaction flow in future prototypes.
The intended dramatic, sarcastic tone was achieved: both users felt the prototype resonated well with the concept and found the ironic approach humorous in a thought-provoking way.
Overall, if we were to move forward with prototype 2, we would have to think about how to make the “click a profile to see other people’s newsfeed” interaction easier to discover. The overall interaction could also be more experimental, as it is a novel newsfeed app.
The third prototype mimics a platform that many of us have used: the Facebook feed. It was built with just a few pieces of paper, which demonstrated the most important functionalities of the UI: selecting a persona, and “seeing the world the way (s)he sees it” through scrolling his/her feed.
Due to time limitations, this prototype was unfortunately tested only once. However, the test indicated that it was simple and intuitive enough, though this might be thanks to the fact that most users are already familiar with the concept.
That said, presenting the topic in its natural context might strengthen its effect in a positive way. As the platform itself is not completely new to the audience, users’ attention is perhaps not on the UI’s functionality itself, and thus the content may have more room to create the desired aha-moments.
Issue #1: (1) Although the texts were a bit unclear (likely a printing issue rather than a fault of the UI itself), the user felt that the differences in the opinions were noticeable.
Issue #2: (2) The user did not intuitively know that swiping across the screen in order to see a different newsfeed was possible. However, as switching between the different feeds through the personas at the top of the feed did seem to be easy, this feature could be entirely removed.
Issue #3: (0) The simplicity in the UI was appreciated. When the user was briefed that in reality it would actually be similar to a Facebook newsfeed, he was surprised and had positive expectations about how that might look like.
Issue #4: (1) When asked if the newsfeed should be categorized by place/location or by demographic, the user felt that demographic-based differences would be more interesting.
If we were to move forward with prototype 3, we could afford to simplify the UI even more, for example by removing the unnecessary swiping possibility. Moreover, the basis for the feed should be elaborated and tested: what content is shown to the users exactly, and why. Should the feed’s content lean on demographic differences, as suggested by our test user, we should be very careful when creating the personas, as the user experience then depends heavily on whether the personas are articulated enough yet not exaggerated.
The fourth prototype lays out Twitter posts on a map of the world. The idea is to build fake Twitter/Instagram profiles and train them using hashtags and posts from celebrities. Then, based on the filter bubble built, a search is made and all the results are mapped onto the world based on location. One can then choose to compare what the world looks like from two different viewpoints (two profiles) and how the bubbles of these two profiles interact with each other. Venn diagram icons at the bottom of the screen help you better understand how differently the two profiles see the world.
This screen shows the base/starting screen. Obviously, it needs a lot more clarity from a UI POV.
Here, the profiles are selected and a search term has been entered. However, it would have been better if the user were encouraged to search first.
In the screen above, the Venn diagram shows the addition mode: all results for both profiles are combined and shown. In the screen below, it shows the mutually inclusive mode: only results that are shown to both profiles are listed.
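Under the hood, the two Venn modes map directly onto set operations; a small sketch with made-up post IDs:

```python
# Hypothetical result sets returned for two profiles searching the same term.
profile_a_results = {"post1", "post2", "post3"}
profile_b_results = {"post2", "post3", "post4"}

# Addition mode: the union of both profiles' results.
addition = profile_a_results | profile_b_results

# Mutually inclusive mode: the intersection, i.e. only shared results.
mutually_inclusive = profile_a_results & profile_b_results
```

The same logic extends naturally to other Venn icons one might add later, such as results unique to a single profile (set difference).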
What didn’t work:
• The idea was communicated quite clearly, but it had to be explained beforehand
• The users initially didn’t understand that these were twitter/instagram posts
• The navigation/flow was a bit confusing (why choose profiles before seeing search results?)
• The venn diagram icons were a bit confusing for the users
Next steps:
• Create a higher fidelity mockup
• Use real posts/images
• Map the user journey better
• Explain Venning functionality better in the UI