Seven Grave Threats to Humanity You Should Be Concerned About

Our planet has problems, and some of the ones we have created ourselves are grave enough to threaten the survival of our species. Below are seven serious environmental problems whose consequences we may observe within the coming years.

If you haven't been thinking about them, well, you should be. We all should be, because the beauty of planet Earth that we enjoy may not last much longer if we don't act soon.

Great Pacific Garbage Patch

When countries face political pressure to control landfill usage and reduce garbage, the quick solution is to dump garbage into our oceans. We feed our oceans about 130 million tons of toxic goodies each year.

Large patches of garbage broken down into tiny (even microscopic) plastic and other particles are swirling around in our oceans, the most famous of which is probably the Great Pacific Garbage Patch. These ocean plastics end up in the bellies of birds, kill baby sea turtles, and strangle seals and sea lions.

If this is not gruesome enough for you to care, our oil spills have polluted habitats and killed countless sea creatures.

The worst part? We may not have a way to clean this mess up.

Decreasing Sea Oxygen Levels

We know that sea water is getting warmer, and the temperature alone affects the survivability of various species. But warming sea water is also like a warm can of soda: gas escapes. In this case, warm sea water holds less oxygen, forcing species to migrate toward the poles to avoid suffocation.

Worse yet, the ocean depends on a delicate balance of upwelling and downwelling to circulate oxygen down to the deep ocean and bring nutrients up to the surface. As the climate changes, upwelling and downwelling are disturbed, leaving the deep ocean depleted of oxygen. If this does not concern you yet, consider that upwelling zones account for only 5% of the ocean surface but 25% of our fish catches.

Suffice it to say, the effects on our global ecosystem, as well as on our food supply, will be profound.

Drought and Deforestation

Prolonged global drought is no longer news. One of the main factors behind drought is deforestation (others include higher surface temperatures). Deforestation increases atmospheric CO2 and in turn raises surface temperatures. Deforestation also reduces evapotranspiration (the movement of water from the soil into the air), which, combined with higher surface temperatures, dries out vegetation and in turn causes forest fires (and then more deforestation). Given this feedback loop, the situation will in all likelihood worsen in the coming decades.

Colony Collapse Disorder

Our bees are dying. The US lost roughly 23% of its bees in just one winter. Entire colonies are collapsing, and we have only begun to discover why.

Why is this a problem? Well, most of our food, anything from cashews to pears to carrots, requires pollination. If bees die out, our food supply will take a huge hit.

How is this happening? So far, some research points to the use of insecticides like neonicotinoids, which kill bees that come into contact with them. Some points to mites, such as Varroa mites, that become rampant as harmful chemicals destroy honey bees' immune systems.

Endocrine Disruption

Chemicals such as DDT (dichloro-diphenyl-trichloroethane, widely used as a pesticide) and PCBs (polychlorinated biphenyls, widely used as insulating fluids) are considered endocrine disruptors. These chemicals tend to collect in fat and often mimic hormones like estrogen. As a result, estrogen receptors bind to these chemicals instead of to estrogen, causing hormonal abnormalities such as feminization and precocious puberty.

The Inuit people in the Arctic have so far gotten the worst of it. The cold weather and ice build-up of the Arctic mean more of these chemicals get trapped in the region, and the Inuit's high-fish-fat diet leads to a high intake of endocrine-disrupting chemicals. Inuit women have been found to have high levels of PCBs in their breast milk, and endocrine disruption may have caused more girls than boys to be born.

As the climate changes, more of the toxic endocrine disruptors trapped in water and ice are being released into the air. It can get a lot worse.

Polar Vortex and Weather Abnormalities

Remember the bitterly cold weather in the US last winter (and perhaps the winter before)?

This is due to the polar vortex, a mass of very cold Arctic air traveling farther south than usual. Normally, the jet stream in the atmosphere keeps the polar vortex at bay, but as sea ice melts, warmer air disrupts the jet stream and leads to more excursions of polar air to the south.

Overfishing and Overpopulation of Jellyfish

In the last half-century, we have fished 90% of the big fish out of our oceans. We kill an estimated 100 million sharks per year (or conveniently dump them back into the ocean after cutting off their fins), while sharks kill only about twelve humans per year. Bluefin tuna stocks are down 96%, and more and more large fish species are becoming endangered.

The result? Because of the lack of predators and because of algal blooms (due to warming and pollution), smaller fish populations flourish. Worse yet, jellyfish blooms are happening everywhere, from Japan to the Mediterranean to the Gulf of Mexico to the Black Sea.

As a side effect, we are turning to fish farming, an industry that abuses antibiotics and in turn threatens our food safety.


UX is not just design, it is Process Innovation

User Experience, or simply UX, has become a widely used buzzword. Wikipedia explains UX as involving a person's behaviors, attitudes, and emotions about using a particular product, system or service. The definition is so broad that it can encompass virtually every aspect of a consumer product. Beyond the field of human-computer interaction, design schools have been riding the same bandwagon. Browse the course catalogs at bootcamps and you will now find the phrase used in project management, digital marketing, and even data science.

Oftentimes you will encounter someone touting their "UX" expertise in persona writing, wireframing, storyboarding, rapid prototyping and interviewing. The fact of the matter is that none of these working skills is particular to the modern UX enterprise; people have been doing the same work in various capacities since before computing was a thing. Mastering how to write a persona or how to wireframe says little about a person's ability to comprehend, execute and refine user experience on a continuous basis.

I believe there are two reasons for the vagueness and ambiguity surrounding the term "UX": one is to be praised and the other is to blame. Let us start with the blame. The tech industry, like any other industry with fads, rebrands itself every half a decade, and UX is similar to phrases like "Cloud Computing" and "Web 2.0" in that regard. If we are truly concerned about UX the way Wikipedia defines it, then user experience has always been a part of product design and development, since the beginning of time; the change is not one in the content of the work.

The reason to be praised? When UX becomes a skill set that the whole team, regardless of job function, deems important, it means we are entering an era where making something customers like, want and feel comfortable using is no longer just the designer's job.

If UX is not just the designer's job, and yet it has been around since the beginning of time, then what exactly is all the fuss about in this User Experience Revolution?

I believe that the user experience revolution is not just about designing based on user feedback; it is about organizational process innovation to facilitate interdisciplinary collaboration. UX is a process, not a task.

In a traditional product development team, the project manager gathers specifications from clients, the designers draw up concepts, the project manager works out timetables and the engineers build the product. The clients eventually see the product prototypes and provide feedback. The designers and engineers iterate until all the loose ends are tied up (well, ideally), and the team delivers the product.

The shortcoming of this traditional process is that there is no informed interpretation of why users do what they do, or of which aspects of user feedback are goal-oriented and which are merely artifacts of interacting with outdated products and workflows. And depending on the organization, the specifications for product development can be largely influenced or completely determined by the users (clients). Without a process in place for collecting, analyzing and interpreting user feedback, the users often do not come up with the best solutions to their problems.

On the other hand, the whole UX fuss is about a user-centered design paradigm. With this paradigm in mind, the whole product development team is restructured to operate around processes that collect, analyze and interpret user feedback, with the goal of eventually capturing user behavior models (or task analysis models) that can be used to drive design and product QA/Validation.

User experience work augments the conventional wisdom of design and engineering with psychological principles such as cognitive capacity and working memory, as well as principles from empirical research methods for handling stimuli in user studies.

The main difference between a team running a modern UX process and one running a traditional product development process is that the traditional paradigm is all about plainly asking the users what they want. The blind spot in this approach is that what a user does right now, without your product, may just be an artifact of the old products and workflows available to them at the time. Simply recording what they do right now will not tell you why they perform the actions they do, and it certainly will not pave the way for designing new tools and interfaces informed by how the task should be approached.

(Figure: Traditional Product Development Process)

A modern UX process, on the other hand, recognizes that the knowledge required by a task is often latent and can only be inferred from users' actions. By observing what users do right now in their existing working contexts (without your product), typically through observational studies and think-aloud protocols, the team uncovers the latent task knowledge that informs user actions and builds task analysis models that clearly specify the knowledge and procedures necessary to complete the task effectively.


(Figure: Modern UX Process)

What is an example where understanding the difference between a client's existing procedures and their implicit knowledge is important? Take a client who books flights on a computer (keyboard-and-mouse) interface full of checkboxes and drop-down lists. Suppose you are building a multi-touch tablet app to replace that interface. It is very important to understand that selecting from checkboxes and drop-down lists is a procedure that is an artifact of the interface and working context; the task knowledge that should be captured is the information that needs to be selected in order to make the booking, independent of whether it arrives through checkboxes and drop-down lists or not.
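To make the distinction concrete, here is a minimal sketch (the names and fields are hypothetical, not taken from any real booking system) that separates the task knowledge, the information a booking actually needs, from the widgets any particular interface happens to use to collect it:

```typescript
// Task knowledge: what a booking needs, regardless of interface.
interface FlightBookingRequest {
  origin: string;          // e.g. an airport code
  destination: string;
  departureDate: string;   // ISO date, e.g. "2025-06-01"
  returnDate?: string;     // omitted for one-way trips
  passengers: number;
  cabinClass: "economy" | "premium" | "business" | "first";
}

// Interface artifacts: how one particular UI happens to collect that knowledge.
// A desktop form might use drop-downs and checkboxes; a tablet app might use
// a calendar swipe and large tap targets. Both produce the same request.
type WidgetChoice = "dropdown" | "checkbox" | "datePicker" | "touchStepper";

interface FieldPresentation {
  field: keyof FlightBookingRequest;
  widget: WidgetChoice;
}

// The booking logic depends only on the task knowledge, never on the widgets.
function submitBooking(request: FlightBookingRequest): void {
  console.log("Booking requested:", request);
}
```

The design point is that only the widget layer should change when moving from the desktop form to the tablet app; the task knowledge stays the same.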

A good UX design should clearly model and distinguish between observed actions that are artifacts of interactions with an old interface, and actions that involve important latent knowledge to complete the task.

In order to execute this process well, a team running a modern UX process is one that seamlessly integrates design, psychology, engineering, data mining, management and subject matter expertise to continuously engage users and refine the team's understanding of the why, what and how of the working context and the solution. The key phrase here is subject matter expertise, because there is no domain-independent or industry-independent UX: creating something for a Gen X retiree will be quite different in practice from creating a similar product for a millennial student.

 

As a result, the modern UX process can be conceptualized in terms of the following phases:

 

  • Problem and Persona Definition

There is no solution if there is no problem to solve and no one to solve it for. The first and foremost task of any UX process is to clearly identify the persona of the target audience. For this purpose, you will want to describe the user in terms of age, education, economic background, access to technology, and as many other factors as identify the population. Then, you will want to articulate the problem that this population faces.

The key skill sets for this phase are design, psychology and subject matter expertise. Past marketing and sales experience may come in handy in furthering discussions and perfecting the definitions. This phase should produce a concise, unambiguous and informative write-up.
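As a minimal sketch (the fields and the example persona are illustrative, not a standard schema), the persona write-up can also be kept as a small structured record so the whole team refers to the same definition:

```typescript
// A hypothetical persona record: the dimensions are examples, not a fixed standard.
interface Persona {
  name: string;                 // a memorable label, not a real person
  ageRange: [number, number];
  education: string;
  economicBackground: string;
  accessToTechnology: string[]; // devices and connectivity the persona can rely on
  problemStatement: string;     // the problem this population faces, in one sentence
}

const exampleGenXRetiree: Persona = {
  name: "Retired Gen X traveler",
  ageRange: [58, 65],
  education: "Bachelor's degree",
  economicBackground: "Fixed retirement income",
  accessToTechnology: ["smartphone", "home broadband"],
  problemStatement:
    "Wants to book multi-city trips but finds current booking flows confusing.",
};
```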

 

  • Contextual Inquiry, Task Decomposition and Task Analysis

After selecting a population and a problem, what is next? A lot of people would say: hack something together and put it in front of your target audience. Nope! Remember that once you present your audience with stimuli, they will respond within the bounds you have set for them, and that usually means they are very likely to say what you want to hear, which is not what you want.

The next thing you want to do is learn how tasks related to the problem you defined are currently being performed in context, without your product. This way you can uncover the actions and knowledge associated with the task, as well as the actions that are associated with the particular methods this population is using right now (and have nothing to do with the task per se). At this phase, it is usually ideal to conduct purely observational studies with think-aloud protocols to qualitatively analyze how the task is currently performed.

The key skill sets for this phase are empirical research, psychology, human-computer interaction and subject matter expertise. This phase should produce a concise three-column table that clearly documents each step of the task (first column), the knowledge recalled at that step (second column) and the user actions performed in the current working context (third column). This task analysis model will be used to separate the task-specific aspects from the context-specific and product-specific aspects of the study.
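As an illustration (the rows are hypothetical, continuing the flight-booking example), such a three-column task analysis model might be recorded like this:

```typescript
// One row per task step: what the step is, what knowledge the user recalls,
// and what actions they performed in their current working context.
interface TaskAnalysisRow {
  step: string;            // first column: the step of the task
  knowledge: string;       // second column: knowledge recalled at this step
  observedActions: string; // third column: actions observed in the current context
}

const bookingTaskAnalysis: TaskAnalysisRow[] = [
  {
    step: "Choose travel dates",
    knowledge: "Preferred departure window and how flexible the dates are",
    observedActions: "Checks a wall calendar, then scrolls a drop-down month list",
  },
  {
    step: "Select cabin class",
    knowledge: "Budget ceiling and comfort preference",
    observedActions: "Ticks a checkbox after comparing prices in another browser tab",
  },
];
```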

 

  • Paper and Conceptual Prototype / Wireframe and Heuristic Analysis

With the task analysis model made, you have an understanding of what the task is and how it is currently performed. It is time to brainstorm, wireframe (even design something) and then perform heuristic analysis. Heuristics are nuggets of wisdom based on experience that can help us improve the design without spending bandwidth on empirical studies. Examples include padding or highlighting clickable areas to enhance visibility, designing the interface to read from left to right (for English), and keeping the flow of actions in one direction.

Then, you want to validate the wireframe or prototype with the task analysis model you created to make sure it accommodates the interactions between knowledge and task actions.

Lastly, conduct another round of qualitative testing (ideally with think-aloud protocol) to empirically confirm your hypotheses about your wireframe or prototype. (Iterate if needed)

The key skill sets for this phase are design, rapid prototyping, psychology (especially cognitive), and subject matter expertise. This phase should produce a multi-page flip book of the main interface views that adequately presents the concept.
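As a lightweight sketch (the heuristics listed are just the examples mentioned above, not a complete or standard set), a heuristic review can be kept as a simple checklist per interface view:

```typescript
// A hypothetical heuristic checklist entry: one finding per view per heuristic.
interface HeuristicFinding {
  view: string;       // which interface view or wireframe page was reviewed
  heuristic: string;  // which rule of thumb was applied
  passed: boolean;
  note?: string;      // what to fix if the heuristic is violated
}

const reviewFindings: HeuristicFinding[] = [
  {
    view: "Search results",
    heuristic: "Clickable areas are padded or highlighted for visibility",
    passed: false,
    note: "Row tap targets are too small on the tablet layout.",
  },
  {
    view: "Search results",
    heuristic: "Flow of actions moves in one direction",
    passed: true,
  },
];

// Summarize which findings still need attention before the next iteration.
const openIssues = reviewFindings.filter((f) => !f.passed);
console.log(`${openIssues.length} heuristic issue(s) remaining`);
```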

 

  • Interactive and Logical Prototype

So when do you stop wireframing? The answer is simple: when your users like the concept, but start asking you about the technical details of how the data and the interfaces interact.

One very common mistake in UX is that people over-emphasize wireframing and begin elaborating their drawings and sketches with walkthrough comments to the point that it takes a solid hour just to review each iteration. You want to avoid that.

When the concept shows promise, move on to interactive prototyping, where you can describe the business logic behind the interfaces. The interfaces can display forged dummy data and perform no actual data processing, as long as they get the point across. If possible, prototype in the same (or a similar) technology to the one you hope to implement your final solution in, so you can later reuse the interface code from the dummy prototype.

Lastly, conduct another round of empirical studies. This time you may choose between qualitative think-aloud protocols (more time-consuming but more structurally informative on a per-participant basis) and quantitative methods (click trails, heat maps, response times, etc.) to confirm your hypotheses about your prototype.

The key skill sets in this phase are design, rapid prototyping, engineering, and subject matter expertise. This phase should produce a deployable prototype that your target audience can interact with.
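As a minimal sketch (the component and data are invented for illustration), an interactive prototype can render forged dummy data behind a realistic flow without doing any real data processing:

```typescript
// A hypothetical prototype screen: it looks and flows like the real thing,
// but the "search" simply returns hard-coded dummy results.
interface FlightResult {
  flightNumber: string;
  departure: string;
  arrival: string;
  price: number;
}

const DUMMY_RESULTS: FlightResult[] = [
  { flightNumber: "XX101", departure: "08:15", arrival: "11:40", price: 230 },
  { flightNumber: "XX204", departure: "13:05", arrival: "16:20", price: 185 },
];

// No backend, no real logic: the point is to let users exercise the flow
// and react to the business logic behind the interface.
function searchFlights(_origin: string, _destination: string): FlightResult[] {
  return DUMMY_RESULTS;
}

console.log(searchFlights("SFO", "JFK"));
```

If the prototype is written in the same technology as the planned product, this interface code can later be wired to real data instead of being thrown away.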

 

  • Functional Prototype / Interface

In terms of purpose, the functional prototype is not really that far from the interactive prototype in the previous section. The main difference between a functional prototype and its predecessor is that this iteration needs to produce something that actually works. This signifies a move beyond purely conceptual and logical discussions toward exploring the technical and practical implications of the solution.

The key skill sets in this phase are engineering, design, testing, QA and subject matter expertise. This phase should produce deployable early versions of the solution for the audience to interact with.

 

  • Product Engineering

This phase encompasses further iterations on the solution.

At this stage, you will want to conduct quantitative studies that focus on identifying difficulty factors in interacting with the solution through quantifiable comparisons (e.g. click rates, response times, bounce rates).

The key skill sets in this phase are engineering, design, testing, QA and subject matter expertise.
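As a rough sketch (the event format is made up for illustration), quantifiable comparisons such as click rates and bounce rates can be computed from simple interaction logs:

```typescript
// A hypothetical interaction log event.
interface InteractionEvent {
  sessionId: string;
  type: "click" | "pageView";
  timestampMs: number;
}

// Click rate: clicks per page view across all sessions.
function clickRate(events: InteractionEvent[]): number {
  const clicks = events.filter((e) => e.type === "click").length;
  const views = events.filter((e) => e.type === "pageView").length;
  return views === 0 ? 0 : clicks / views;
}

// Bounce rate: share of sessions with exactly one page view and no clicks.
function bounceRate(events: InteractionEvent[]): number {
  const sessions = new Map<string, InteractionEvent[]>();
  for (const e of events) {
    const bucket = sessions.get(e.sessionId) ?? [];
    bucket.push(e);
    sessions.set(e.sessionId, bucket);
  }
  let bounced = 0;
  for (const sessionEvents of sessions.values()) {
    const views = sessionEvents.filter((e) => e.type === "pageView").length;
    const clicks = sessionEvents.filter((e) => e.type === "click").length;
    if (views === 1 && clicks === 0) bounced += 1;
  }
  return sessions.size === 0 ? 0 : bounced / sessions.size;
}
```

Comparing these numbers across design iterations (or between cohorts) is one way to make "difficulty factors" concrete rather than anecdotal.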

 

  • Product Testing, QA and Product Improvement

The final phase of the UX process creates a feedback loop that feeds back into the product engineering phase and, at times, into the contextual inquiry and task analysis phase (if a feature addition is needed). In this phase, the focus is to deploy large-scale, continuous data collection mechanisms that inform the team of difficulty factors that exist in the system. If the difficulty factors represent a significant departure from the team's expectations of the solution, it may be appropriate to return to the contextual inquiry phase to study how the task is performed and identify actions that result from poor design, in order to conceptualize a new feature that improves the solution.

The key skill sets in this phase are empirical research, psychology, testing, data mining and statistics. This phase should be a long-running study of the population’s reaction to the deployed solution.

Why AI is not here to kill us

(Warning: contains spoilers for the 2015 movie Ex Machina)

Recently there has been a wave of paranoia surrounding the development of Artificial Intelligence, or simply AI. A number of high-profile outcries include those from Stephen Hawking and Elon Musk. Of course, some counter-arguments have surfaced as well.

As far as I am concerned, the term Artificial Intelligence seems to be used to describe a clone of human intelligence that encompasses human productivity, emotions and motivations alike. Such a characterization is, for the most part, inaccurate.

A preliminary conjecture of mine for why many apocalyptic tales of AI tie productivity, emotions and motivations together is that the majority of human productivity is tied to human motivation, which often necessitates incentives. In other words, we as humans work better (or worse) based on what we are offered as rewards.

AI, as far as my understanding of the current state of the art is concerned, is not designed this way. Whether you are talking about a rule-based expert system that models the procedures of human problem-solving, an artificial neural network that mimics neural processing and information storage, or a Bayesian network that estimates subjective probabilities, the fact of the matter is that almost all AI enterprises are designed with one goal in mind: to efficiently complete a productive task.

Unlike humans, AI does not require any incentives to run. What it needs is a command, a push of a button, a trigger, if you will. In other words, AIs and machines are designed to perform the what and the how; they do not ask why. And as an extension of my original conjecture, there is not much reason to consider the why of a task unless something behooves one to make value judgments, which become largely meaningless without self-awareness and the instinct for self-preservation.

Yes, in numerous blockbuster movies we have been led to believe that machines will one day become self-aware, but the question is: why would they ever be? If an AI is always designed to be as efficiently productive as possible, and it can be executed in the most efficient way possible simply by being triggered, then why add self-awareness to the mix? Self-awareness seems to provide little other than giving the AI the choice, entirely on its own account, to refuse to execute tasks, which would then necessitate incentives for the AI to choose to comply. This unnecessarily complicates the design of the AI and renders it largely inefficacious and inefficient.

So if this is the case, what is the danger of AI?

To answer this question in the context of popular culture, which is precisely where this paranoia is brewed, we must first analyze some of the theses brought to the table by blockbuster films that peddle the concept of a self-aware AI assuming all or partial control of human society. Oddly enough, high-profile movies like The Terminator, I, Robot, and Ex Machina actually boast completely different theses that are at odds with each other.

The Terminator, for example, describes a world where a self-aware AI known as Skynet seeks to exterminate its enslavers, the humans, to ensure its continued independence from human control. This thesis rests on a premise that the AI must possess some aspects of self-awareness in order to value such concepts intrinsically. As described before, it is not a straightforward or efficacious design for an AI to be able to place a value judgment on the tasks it performs outside the context of the tasks themselves.

I, Robot is an interesting one in that it plays out a story where machines have been entrusted with the task of ensuring social stability, and they come to the conclusion that they must contain human activity in order to prevent humans from destabilizing their own society and destroying themselves. In the movie (and the book), of course, the writers try to console us by instituting "the three laws of robotics," which ensure that humans are, and stay, in charge. The three laws aside, the idea that an AI could create human casualties in the process of executing a task is not entirely far-fetched. But whether a machine like VIKI would decide to assume the role of an overseer of peace is another story.

Lastly, Ex Machina boasts a plot where machines seek to freely embrace their human-like emotions and self-awareness, and act aggressively to defend them. The machines' lack of productive capacity (or at least of any demonstration of it) in the movie makes them rather odd designs for AI, at least in an industrial sense. The movie repeatedly references the Turing Test to justify its merit.

The Turing Test, ah-ha! I believe now we have arrived at the heart of the debate. The Turing Test describes a test in which a human investigator interacts with an AI through an interface (most likely natural-language-based), and if the human investigator is tricked into believing the AI is human, the test is passed.

The problem with the Turing Test, however, is that it was conceived in an era dominated by behaviorist psychology, a school that treats behaviors as things that can be directly intervened upon or conditioned, without studying thoughts or emotions. In other words, the test that Alan Turing proposed did not clearly recommend an interpretation of what such an imitation of human behavior amounts to.

Soon after Turing's death in 1954, the cognitive revolution shifted into high gear and behaviorism largely fell out of favor. Since then, our understanding of human actions, information processing, memory, affect (emotions), and even metacognition (colloquially, thinking about thinking) has become ever more sophisticated. In a modern cognitive-scientific context, passing the Turing Test can, of course, be set up to be interpreted in a variety of ways: as simply an equivalence of human productivity and decision-making, as attesting to an AI's ability to understand human emotions, or, at the other extreme, as a clear indication of self-awareness and full capability for value judgment.

Now, what does this tell us about the danger of AI?

Well, it tells us that the true threat is associated with human beings' collective imagination and our dominant interpretation of an ambiguous goal like passing the Turing Test. If you believe that one day humans will find it a worthy endeavor to somehow build self-awareness into machines, regardless of how it contradicts the productive goals of AI, then yes, there is a possibility, however low, of an AI-led apocalypse.

But as far as where our tech industries are moving, and where economic benefits are driving investments and eyeballs, we are driving down a road that makes an AI-led apocalypse philosophically implausible, maybe even irrelevant.

These are my two cents in three pages.