Creating an ERD (Entity-Relationship Diagram) is one way of representing the whole picture of a concept in a single, data-packed overview.
It helps with crystallizing and conceptualizing ideas. Built around the most important entities and their attributes, and showing the relations and connections between different entities, it also provides a rough database outline.
At the time we initialized our concept, on February 13th, ours looked like the following.
Back then, the goal was already clear: to demonstrate how differently people see the world. As we were still scoping the project, we hadn’t yet made up our minds on what type of content to focus on, or from whose point of view we’d look at the topic. Our initial ideas derived from our own experiences of platforms we’d used frequently, such as Google and its (forced) accounts, Facebook, and Instagram.
Our starting point was the intuition – or misunderstanding, depending on whom you ask – that Google’s search results would be based on the account one has to create in order to use the search giant.
However, as we were looking for ways to get access to this type of data, we discovered two crucial things:
Search results are based on
#1 Your previous browsing history.
#2 Your location.
Thus, even if we created a few accounts with different demographics, the results likely wouldn’t differ much – even if we’d gone for fake IP addresses.
We iterated on the topic, and pondered everything from creating several Instagram profiles to scraping Google search results with automated scripts. (Which the search engine doesn’t like, and oftentimes blocks if it detects suspicious behavior.)
Our current approach could be illustrated with the following:
Narrowing down to only news and content that’s (supposed to be) informative, we aim to bring the concept closer to users by mimicking real-life personas in the project’s pre-made profiles. Each of them will have a customized newsfeed, which the user can then compare.
This past week, with the overarching synopsis in mind, we tested a series of initial paper prototypes that all used different information from social media to convey the same message – from Instagram (prototype 1) to Facebook (prototype 2 and prototype 3) to Twitter (prototype 4). Unfortunately, due to time constraints, not every prototype was tested twice, but the following post still aims to give a holistic picture of each and the main insights.
The underlying concept with prototype 1 was to visualize how the images that accompany our visual media could differ between different ‘profiles’ built to be reminiscent of varied filter bubbles. Ultimately, it is anticipated that similar to how the news headlines and textual content would differ, the visual media would as well. Using Instagram as the social media in question, the scraped images (shown here as gray boxes) slowly start to differ from one another over time as the filter bubble effect intensifies.
For prototype 1, there were two separate iterations to explore how the division of sourced Instagram material could be presented. Version A, on the left, explored how the images could multiply over time whereas version B, on the right, explored how the images themselves, rather than quantity, could change over time. In both cases, the option to swipe the headline tab up allowed users to see the corresponding textual data if desired.
Prototype 1 was tested with two separate users.
All in all, both users vastly preferred version A, which is the option where the images multiplied over time – it was far more intuitive than version B. However, one point that must be addressed regarding prototype 1 is that for both users, it was very difficult to immediately understand what was happening without prior guidance. The choice to use gray boxes as an abstraction (as opposed to actual example images) was a critical mistake that caused a lot of initial confusion in the testing process. Nevertheless, the concept to see our filter bubbles through visual means was intriguing to both.
If we were to move forward with prototype 1, the leading suggestion in the long run would be to clarify some aspects of the UI to ensure that there is not too much information delivered all at once. For example, perhaps the rolling ‘Google search’ feature could be entirely taken out. Similarly, the headline bar should be made more intuitive or also taken out, with the textual information displayed in a different way if it feels necessary (i.e. simply underneath each corresponding image).
The second prototype is a newsfeed app that allows you to see what other people’s newsfeeds look like. The personalized newsfeed content on social media platforms nowadays is filtered so that what we see is a reflection of our demographic/geographic information and our posts/clicks. We see what the algorithm determines we should see; we see what we “want to see”.
This prototype allows you to step outside of your individual bubble and directly step into the bubbles of others around the world and experience their views. It aims to expose the filter bubble effect to people and provide a more comprehensive way to view news.
Several world-famous people are chosen here as example profiles. After typing a keyword into the search bar, users get a newsfeed about that topic, and by clicking the different profile icons below, they enter a chosen profile’s newsfeed.
(Technically, we can make the fake profile accounts by cloning each chosen profile’s Twitter account, copying their posts, and letting the algorithm decide which news to present.)
WHAT WORKED
The dramatic, sarcastic intention was achieved: both users felt the prototype resonated well with the concept and found the ironic approach humorous, in a thought-provoking way.
WHAT DIDN’T WORK
The “click a profile to see other people’s newsfeed” interaction was not easy to perceive. If we were to move forward with prototype 2, we would have to make it more discoverable, and the overall interaction could be more experimental, as it’s a novel newsfeed app.
The third prototype mimics a platform that many of us have used: the Facebook feed. It was built with just a few pieces of paper, which demonstrated the most important functionalities of the UI: selecting a persona, and “seeing the world the way (s)he sees it” through scrolling his/her feed.
Due to time limitations, this prototype was unfortunately tested only once. However, the test indicated that it was simple and intuitive enough – though this might be thanks to the fact that most users are already familiar with the concept.
That said, presenting the topic in its natural context might strengthen its effect in a positive way. As the platform itself isn’t completely new to the audience, users’ attention perhaps isn’t on the UI’s functionality itself, and thus the content may have more room to create the desired aha-moments.
Issue #1: (1) Although the texts were a bit unclear (likely not a fault of the UI itself, but rather a printing issue), the user felt that the differences in the opinions were noticeable.
Issue #2: (2) The user did not intuitively know that swiping across the screen in order to see a different newsfeed was possible. However, as switching between the different feeds through the personas at the top of the feed did seem to be easy, this feature could be entirely removed.
Issue #3: (0) The simplicity of the UI was appreciated. When the user was briefed that in reality it would actually be similar to a Facebook newsfeed, he was surprised and had positive expectations about how that might look.
Issue #4: (1) When asked if the newsfeed should be categorized by place/location or by demographic, the user felt that demographic-based differences would be more interesting.
If we were to move forward with prototype 3, we could afford to simplify the UI even more, for example by removing the unnecessary swiping possibility. Moreover, the basis for the feed should be elaborated and tested: what content exactly is shown to the users, and why. Should the feed’s content lean on demographic differences, as suggested by our test user, we should be very careful when creating the personas, as the user experience then heavily depends on whether the personas are articulated enough yet not exaggerated.
The fourth prototype lays out Twitter posts on a map of the world. The idea is to build fake Twitter/Instagram profiles and train them using hashtags and posts from celebrities. Then, based on the filter bubble built, a search is made and all the results are mapped onto the world based on location. One can then choose to compare what the world looks like from two different viewpoints (two profiles) and how the bubbles of these two profiles interact with each other. Venn diagram icons at the bottom of the screen help you better understand how differently the two profiles see the world.
This screen shows the base/starting screen. Obviously, it needs a lot more clarity from a UI POV.
The profiles are selected and a search term has been entered. Ideally, however, it would have been better if the user were encouraged to do a search first.
In the screen above, a Venn diagram shows the additive mode: all the results for both profiles are combined and shown. In the screen below, it shows the mutually inclusive mode: only results that are shown to both profiles are listed.
What didn’t work:
• The idea came across quite clearly, but only after being explained beforehand
• The users initially didn’t understand that these were twitter/instagram posts
• The navigation/flow was a bit confusing ( why choose profiles before searching results)
• The venn diagram icons were a bit confusing for the users
What we’d improve:
• Create a higher fidelity mockup
• Use real posts/images
• Map the user journey better
• Explain Venning functionality better in the UI
“We say, ‘Doesn’t this fish taste delicious?’ or ‘Doesn’t that painting look beautiful?’ but we never know what the experience is like in another person’s mind.”
– David Eagleman
Our perceptions are shaped by previous experiences – ours and others – and by emotions, by beliefs and religion.
And, by circumstances and things that, sometimes, seem to “just happen”.
Who would have guessed?
Only 19 days ago, the prime minister of Finland stated that “there’s no need to worry too much right now”. Today, less than three weeks later, the country is about to be closed.
How could this happen?
The first COVID-19-case confirmed in Finland was on January 29th, 2020. Who would have guessed that in less than three months we would end up seeing this day, when the borders of Finland are about to close?
This should not have come as a surprise though. Within the EU alone, nine countries have already started quarantine: the Czech Republic, Cyprus, Denmark, Hungary, Latvia, Lithuania, Poland, Slovakia, and Spain have announced they would close their borders to all foreigners. Outside the EU zone, for example, Norway has done the same.
Shall we panic or shall we not?
Shall we take this seriously or shall we not?
It really depends whom you ask.
Some have said that the panic is larger than the case itself, yet they’ve ended up taking harsh actions afterwards.
Some consider it not to be their concern.
“Patrons sit outside a bar along the Venice Beach Boardwalk Sunday, March 15, 2020, in Los Angeles.
Young and healthy people have a lower risk of contracting Covid-19, the disease caused by the coronavirus—but public health experts told Forbes they should still stay home as much as possible because it can still present a risk to them, the U.S. health system and at-risk populations.”
– Rachel Sandler, Forbes
“I think social media communication is very much reflecting our fears and concerns with the virus, and this should be no surprise. As people struggle to learn more about it, to cope with the disruptions and seek to understand how they should deal with it, they are using social media to accomplish those goals and to express their fear and uncertainty.”
– Jeff Hancock, Stanford, Prof. of Communication
Meanwhile in Italy: “Authorities have been working to set up hundreds of intensive care beds in a specially created facility in the Fiera Milano exhibition center but are still waiting for sufficient respirators and qualified personnel.”
– James Mackenzie, USNews
Whom should we believe?
The epidemic and the media around it demonstrate how differently the world is perceived.
Much of our world today is segmented into filter bubbles, with a seemingly infinite number of organizations constantly profiling us to filter the content (such as news articles) we see based on what we are algorithmically pre-determined to like. Moreover, what we’re fed is also a result of our own decisions: what we’ve liked, what our friends have liked, and what we’ve decided not to see, navigating toward the ‘most relevant content to me’.
If I could see what you see
In light of this, we seek to develop a dynamic visualization that exposes the bubble effect and speaks to the greater relationships that exist between filter bubbles. This may not be necessarily about being for or against a particular issue, but rather, how demographic or geographic information influences and colors the media we are served on the issues; and, maybe the way our own decisions shape the content, too. Ideally, the visualization allows you to step outside of your individual bubble and directly step into the bubbles of others around the world and experience their views firsthand.
A relatively recent Bloomberg article presents a paradoxical statement: we’re having more elections than ever before, but our world isn’t necessarily becoming more democratic. In fact, it’s becoming less so.
As part of our benchmarking exercise, we delved deeper into the many insightful visualizations in the article. (To read the full article for yourself, follow the link here.)
Given that Bloomberg is a news agency based in the US, the likely intended audience for the article is a Western one. Even though politics and heavier subject matter (such as the state of our democratic world) are topics that tend to skew older, 48% of online traffic to Bloomberg actually comes from those under the age of 34. Therefore, it’s likely that this article was meant for both a diverse and wide age range.
The purpose of the visualizations in the article is to call attention to these recently alarming trends, motivating readers to think critically about elections and be active in them. Through contrasting color tones (blue and red), the article makes a clear distinction between the “more democratic” and “less democratic” categories, aiding the digestion of otherwise rather dense data sourced from the V-Dem Liberal Democracy index. By continuously reaffirming these distinctions, much to the surprise of the reader, the message becomes apparent: we must not make the blanket assumption that our level of democracy will always remain the same without effort and action.
Between the different visualizations, the consistency in colors works well in communicating this message. Similarly, there is only one parameter through which to evaluate the visual data, which makes it easy to jump from one visualization to the next without spending too much time understanding its individual message. The very first visualization in the article (pictured first in this post) also includes a dynamic element whereby, as the user scrolls down, small messages flash onto the screen to highlight specific elements of the visualized data. This successfully feeds some summarized textual input to the reader even if he or she does not fully read the article itself.
Nevertheless, some aspects of the visualizations were less successful. When quickly going through the article, it was not always clear what the difference was between each of the visualizations, especially given that the overall message stayed generally quite consistent. Of course, through a more diligent reading the differences become more apparent, but this level of effort is not always guaranteed with modern-day audiences online. Similarly, the icons that were meant to encourage interaction were often confusing at first glance, and it wasn’t always clear what one needed to do in order to receive more information from the visualization.
Benchmarking what we’d do, I came across Democracy 3. It’s a semi-veteran government simulation game, published back in 2013 by Positech Games.
The player takes the role of president or prime minister of a democratic government and rules their country. They’re in charge of an almost innumerable number of things: from the big picture, including budget and tax policy, to very detailed yes-no decisions on specific questions.
Random challenges also take place, e.g. criminality, demonstrations, and environmental problems.
The game requires a decent understanding of government – thus, the target audience may be people aged 15+ who are into somewhat analytic, strategic, semi-realistic games.
The game also yielded a spin-off version in 2016, Democracy 3: Africa, with its own twist: when you’re ruling your country, either you’re to solve the difficult problems (such as forced marriages, poverty, children’s malnutrition), or to go deeper into fierce dictatorship.
This is democracy gamified – it concretizes that the decisions made there aren’t light or easy to make. Maybe there’s some kind of educational aspect to the game, too, assuming you’re a US citizen or at least know the governmental system quite well.
There are some controversial things in the game with regard to real life: as an example, it’s possible to adjust the death penalty, legalize drugs, etc.
Globally, democracy is in retreat. In 2019, The Economist Intelligence Unit’s Democracy Index fell to the lowest average since it began in 2006. Just 22 out of 167 countries attained “full democracy” classification. More than 2.7bn people live in an authoritarian regime, the bottom tier of the Democracy Index. Democracy scores for China, Malta and Hong Kong deteriorated, and India, the world’s biggest democracy, dropped ten places to 51st. Still, some silver linings can be detected among the clouds. Which countries have moved up the democracy ranking? Click the link in our bio to find out, and to read why global democracy has had another bad year.
What is this?
-Global democracy visualization map
-Social media user
-In 2019, The Economist Intelligence Unit’s Democracy Index fell to the lowest average since it began in 2006. This map aims to bring to people’s attention that global democracy has been declining.
-Shows how democracy index scores are distributed across different countries of the world
-It gives people a sense of how different levels of democracy are distributed around the world, in a quick and minimal way
What not? Why?
-It doesn’t show how global democracy is in “decline”, as the title states, because there is no comparison information from past years. It also doesn’t show how the “democracy index” is calculated.
After a week of basic research into the areas of Lights and Lighting that have interested each one of us in the group:
Ameya: Types of lighting models and illumination techniques in computer graphics
Maija: Mathematical interpretation of computer graphics lighting
Kiko: Material Design and trends in GUI Lighting
Jenna: Effects of lighting on the perception of interfaces and content
We will each present for 3 minutes, followed by discussion.
My research into the lighting of computer graphics led me to explore certain concepts and considerations that determine what to calculate and what to leave out, based on the viewing requirements. Advances in computation have made it possible to create accurate simulations of real-world lighting today.
I will introduce some of the concepts in this post.
Types of Light Sources
Point Light: Light emanating from a source in all directions, with the intensity decreasing as a function of distance. Eg: Light bulb
Directional Light: Uniform lighting from one direction. The source is modelled to be infinitely far away from the object being illuminated, and the intensity of the light stays constant (the decrease in intensity as a function of distance is negligible). Eg: Sunlight
Spotlight: A directional cone of light, with the intensity brightest along the central axis of the cone. Eg: Flashlight
Ambient light: No source is modelled. Ambient light is uniformly distributed throughout the scene and is independent of the direction, intensity or distance of the object being illuminated.
A few other types of light are volume lights and area lights, but these are just particular use-cases of the four types of light sources.
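To make the contrast between point and directional lights concrete, here is a minimal sketch (not from the presentation) of how a point light’s distance falloff is often computed in real-time graphics; the constant, linear, and quadratic coefficients are hypothetical example values.

```python
def point_light_intensity(base_intensity, distance,
                          k_constant=1.0, k_linear=0.09, k_quadratic=0.032):
    """Intensity of a point light falls off with distance.
    A directional light would skip this step entirely, since its
    intensity is modelled as constant."""
    attenuation = 1.0 / (k_constant + k_linear * distance
                         + k_quadratic * distance ** 2)
    return base_intensity * attenuation

# Close to the source the light is near full strength; far away it fades.
near = point_light_intensity(1.0, 1.0)
far = point_light_intensity(1.0, 20.0)
```

Physically, intensity falls off with the inverse square of distance; the extra constant and linear terms are a common practical tweak to keep the falloff stable near the source.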
Diffuse Lighting: This models how light interacts with the surface of the object, creating lighter and darker pixels to simulate dark and light parts. This simulates the material and texture of the object.
Ambient Lighting: Makes the shape of the object visible even when no light source is modelled. It gives a flat, 2-dimensional representation of the object for a given perspective.
Specular Lighting: This models the highlights and shininess of the object, depicting its smoothness and metallicity or reflectivity.
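These three components are the ingredients of the classic Phong reflection model. As an illustration, here is a minimal Python sketch that combines the ambient, diffuse, and specular terms for a single surface point; the coefficient values are arbitrary examples, not taken from any particular renderer.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(l, n):
    # Reflect the light direction l about the surface normal n.
    d = 2 * dot(l, n)
    return tuple(d * ni - li for li, ni in zip(l, n))

def phong(normal, to_light, to_viewer,
          ambient=0.1, diffuse_k=0.7, specular_k=0.2, shininess=32):
    n = normalize(normal)
    l = normalize(to_light)
    v = normalize(to_viewer)
    # Diffuse: brighter where the surface faces the light.
    diffuse = diffuse_k * max(dot(n, l), 0.0)
    # Specular: a shiny highlight where the reflection aligns with the viewer.
    r = reflect(l, n)
    specular = specular_k * max(dot(r, v), 0.0) ** shininess
    # Ambient: a constant base level so the shape stays visible.
    return ambient + diffuse + specular
```

A light shining from behind the surface contributes only the ambient term, which is exactly why ambient lighting alone looks flat and two-dimensional.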
Local Illumination: Light interactions are calculated only with individual objects in the scene, separately. This gives a less realistic, more artificial lighting effect, but it is much faster and takes comparatively less computational power.
Global illumination: Light interactions are calculated considering all objects in the scene, including secondary interactions based on reflection and refraction of light with other objects. This gives a more realistic effect but at the cost of higher computational power.
One smart way to reduce the number of calculations through the scene is to go backwards from the viewer’s perspective to the light source, which ensures that the only interactions calculated are the ones relevant to the viewer’s perspective.
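A minimal sketch of this backwards, eye-first approach: one ray is cast per pixel from the viewer into a scene containing a single hypothetical sphere, and only pixels whose rays actually hit something would go on to the lighting calculations. The scene setup and characters used are illustrative stand-ins.

```python
import math

def hit_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def render(width, height):
    # Trace one ray per pixel from the eye into the scene; rays that never
    # hit the sphere need no further lighting calculation at all.
    eye = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a point on a virtual image plane at z = -1.
            u = (x + 0.5) / width * 2 - 1
            v = (y + 0.5) / height * 2 - 1
            direction = (u, v, -1.0)
            t = hit_sphere(eye, direction, sphere_center, sphere_radius)
            row.append('#' if t is not None else '.')
        image.append(''.join(row))
    return image
```

Tracing forward from the light instead would waste effort on countless rays that never reach the eye, which is why backwards tracing is the standard shortcut.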
In Google’s Material Design System, UIs are displayed in an environment that is a metaphor of the physical world. It is inspired by the physical world and its textures, including how they reflect light and cast shadows.
In the physical world, objects can be stacked or attached to one another, but cannot pass through each other. They cast shadows and reflect light.
Material Design reflects these qualities in how surfaces are displayed and move across the Material UI. Surfaces, and how they move in three dimensions (3D), are communicated in ways that resemble how they move in the physical world, using light, surfaces, and cast shadows.
The goal of Material Design is to create hierarchy, meaning, and focus that immerse viewers in the experience.
In the Material Design environment, virtual lights illuminate the UI. Key lights create sharper, directional shadows, called key shadows. Ambient light appears from all angles to create diffused, soft shadows, called ambient shadows.
Shadows in the Material environment are cast by a key light and ambient light. In Android and iOS development, shadows occur when light sources are blocked by Material surfaces at various positions along the z-axis. On the web, shadows are depicted by manipulating the y-axis only.
In the Material environment, shadows should always combine the key and ambient lights.
Material surfaces at different elevations cast shadows. As you can see from the video, the smaller the elevation value, the sharper and more solid the shadow gets.
Because shadows express the degree of elevation between surfaces, they must be used consistently throughout your product.
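To illustrate the key-plus-ambient idea on the web, here is a hypothetical Python sketch that builds a combined CSS box-shadow string from an elevation value; the offsets, blurs, and opacities are illustrative stand-ins, not Material’s actual shadow tokens.

```python
def material_shadow(elevation_dp):
    """Sketch: combine a key shadow (sharper, offset downward along the
    y-axis) with an ambient shadow (softer, no directional offset) into
    one CSS box-shadow value. Scaling factors are made up for illustration."""
    # Key light: the downward offset grows with elevation, fairly tight blur.
    key = f"0px {elevation_dp}px {elevation_dp * 2}px rgba(0, 0, 0, 0.30)"
    # Ambient light: arrives from all angles, so no offset and a wide blur.
    ambient = f"0px 0px {elevation_dp * 3}px rgba(0, 0, 0, 0.15)"
    return f"{key}, {ambient}"

# A surface at higher elevation gets a larger, softer combined shadow.
low = material_shadow(2)
high = material_shadow(8)
```

This mirrors the point from the guidelines that the web depicts depth by manipulating the y-axis only: both layers move and blur vertically as elevation changes.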