Ed & Tech #1: On Creativity and Learning, not mutually exclusive

It has been a while since I last blogged; perhaps it is fitting now to return to talking about education.

Let’s home in on an important issue in education: creativity.

Or to be more exact: how should we reform education to make it more conducive to creative activities?

Over the last couple of months I picked up several books that provide a good survey of the theories and issues behind creativity. Here are a few good ones:

  1. How Learning Works by Susan Ambrose et al. – A good survey of learning science concepts such as knowledge, motivation, and metacognition to help us understand how learning happens.
  2. Most Likely to Succeed by Tony Wagner and Ted Dintersmith – Provides an overview of problems and challenges with a testing-driven education system
  3. That Used to Be Us by Thomas L. Friedman and Michael Mandelbaum – Discusses growing economic challenges in the U.S., primarily due to an inadequate labor force.
  4. Change.edu: Rebooting for the New Talent Economy by Andrew Rosen – Discusses challenges in education and in the U.S. economy that call for reform in our education systems in order for the country’s labor force to stay competitive.

as well as the following paper:

  1. Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching by Kirschner, Sweller and Clark – Explores the effectiveness (or lack thereof) of pure constructivist and discovery learning methods as pedagogical methods

 

If you’re short on time, simply skim through the bold-faced items.

There is no doubt that fostering creativity through formal education is a popular theme, and many good (albeit contentious) points have been raised in writing. Despite all the effort, I have always felt that certain narratives are left out of the mainstream conversation because they lack the shine and luster of project-based learning or computer science education. Much of what has been left out, I believe, is filtered out because “creativity thrives on robust content learning” is never an attractive slogan for a reform.

I want to take this chance to contribute to this conversation by exploring additional narratives that we should consider in designing pedagogy, educational technology solutions and educational policies.

 

The missing case studies

The first thing I want to survey is the imbalance of case studies (or anecdotes, really) that pervade books on education and educational reform. By that I mean: we have plenty of case studies of 1) students who performed well in the current test-driven K-12 system and prospered, 2) students who performed well in the current system but ceased to perform in higher ed or in the job market, and 3) students who performed poorly in the current system but succeeded in learning on their own (autodidacts, if you will), and then performed well either in higher ed or in the job market.

These stories have been regurgitated over and over, and it’s obvious that at least two archetypes of learners have been left out:

4) students who performed poorly in and outside the formal K-12 education system and did not do well beyond that, as well as 5) students who performed poorly in the current system, succeeded in learning on their own, and then performed poorly in higher ed and in the job market.

It is somewhat understandable why these two archetypes are left out: #4 is demotivating, and #5 is outright anticlimactic.

But the truth is that #5 is an important and nontrivial case that very few who talk about education want to acknowledge.

We want to believe that autodidacts are great role models for our pedagogy and they will save the world, but we generally refuse to acknowledge that a significant portion of autodidacts will be unable to tackle many challenges both in life and on the job, due to their isolated style of learning.

I am an autodidact to a large extent, inasmuch as I started teaching myself programming when I was twelve, with a couple of books and a dial-up internet connection. This was the late nineties. I grew more interested in programming because I was thoroughly burnt out by the monotony of compulsory education, which ran pipelines that fed middle-schoolers into high schools and SAT prep, then eventually funneled them into colleges. Programming became more attractive as I discovered ways to learn and to create flexibly, without a prescription or a mandate. Through the internet, I met many developers, most of whom were in their twenties and a few of whom were my age. This loose online association of autodidacts was a fascinating gateway to creativity that, for a bored teen like me, was a life-changing experience.

Eventually programming led me to regain my interest in academic subjects such as physics and mathematics, and even ones farther removed like literature, as I learned to see content learning as a foundation for creativity. Later I was admitted to Carnegie Mellon, studied computer science and philosophy, did some research, and eventually started my own company.

On the other hand, the absolute majority of the autodidacts I encountered in my teens never turned out the same way. Those in their twenties and thirties started out as IT consultants and remained consultants. Those who were teens like me failed to gain admission to an engineering program in college. A once close friend of mine, an autodidact who taught himself half a dozen programming languages and frameworks, a brilliant hacker who wrote scripts to crack passwords and to probe for security loopholes, got burnt out with computer science because of the lack of adequate schooling in mathematics. He eventually dropped out, became an IT consultant, and was stuck doing that.

I was an IT consultant for a couple of years while in school and before starting my company, and it is no surprise that in IT consulting the day-to-day work involves “creations” that come out of cookie-cutter templates (e.g. setting up WordPress, creating a questionnaire, setting up a server, etc.) and lack creativity in any sophisticated sense. It is different from the type of software engineering you expect at tech startups and R&D firms, which requires deep scientific knowledge and problem-solving expertise. In fact, IT consulting is the “development” work that has been increasingly automated, or moved offshore to developing countries with cheaper mid-skilled labor.

The real point of failure wasn’t creativity; it was the lack of robust content learning and effort regulation that imposed a ceiling on many self-taught developers who never received formal engineering training. After a certain point, continuous fiddling with code won’t lead to effective solutions to nontrivial problems; one needs to acquire theoretical knowledge (like symbolic systems) as well as better learning strategies in order to comprehend and design solutions that involve complex systems.

Practical autodidacts do not lack project-based learning experience, as most of what they do is grounded in some sort of real-life problem-solving. What they generally lack are efficient conceptual frameworks to organize thoughts at an abstract level. The problem isn’t that autodidacts can’t teach themselves complex conceptual frameworks, such as various symbolic systems (e.g. differential calculus, predicate logic, combinatorics, etc.); it is that, because complex thought often appears (at surface value) divorced from practical problem-solving, relatively few autodidacts have the patience to regulate their efforts to learn these “boring” and “useless” topics. Unfortunately, most meaningful innovations in technology and discoveries in science require higher-level conceptual frameworks to progress. Content learning of these seemingly useless (as many publications put it) topics such as calculus, economics, discrete mathematics, game theory, etc., turns out to matter a lot for creativity at a higher and much more sophisticated level.

The point here is not to disparage any form of discovery or project-based learning, but to understand the difference in aims between the teaching of creativity and the teaching of knowledge, skills, and facts.

 

The case of vacuous creativity

One of the biggest myths, perpetuated just a little bit too much, is that children are born creative geniuses and that education murders that genius. Similar things have been said in the readings I recommended in this post, and have been echoed in many places, including the famed TED talk by Sir Ken Robinson.

Sure, there is probably some truth in this, but there is also an obvious paradox that we enjoy ignoring:

It seems like the most creative and productive members of our society are also simultaneously the ones who are the most educated and most proficient in content learning.

In simpler terms, many if not most of our top innovators are also the ones who remember their multiplication tables, who remember when the US was founded, who remember the quadratic formula, and who remember the names of the Greek architectural traditions. Just about everything that many call useless, impractical, and, worst of all, uncreative.

Our best researchers, best liberal arts students, and even those who were persuaded to choose a career as opposed to college (like those in the Thiel Fellowship) are also the ones who read the most and know the most compared to the rest of the population.

It then doesn’t make sense to think that content learning in our education system is what is killing creativity, simply by the law of contraposition, which, of course, probably belongs to the set of topics that education reformers deem useless and impractical for a student to learn.

The truth is that vacuous creativity is not what we are aiming for. We often point to kids’ drawings and spontaneous creations as evidence of a creativity we no longer possess in adulthood, but in reality, the type of vacuous creativity that we entertain during childhood is not conducive to creative productivity in the real world. A child drawing a plane is very different from an aerospace engineer designing the next commercial turbojet. Just because I drew a castle in kindergarten doesn’t mean I would have (or even could have) become an architect or a civil engineer. To claim otherwise is akin to judging a person’s business acumen based on how they perform in a game of Monopoly.

We eventually lose interest in vacuous creativity because it isn’t productive or “worth the time” in many cases; and at the same time very few of us gain the ability to create productively, because it is difficult and arduous.

Our aim in reforming education is not to retain vacuous creativity, but instead to teach what is necessary to develop productive creativity, and while motivation and metacognition play big roles in this process, so does content learning.

We don’t talk about this often because it is not the most “sexy” thing to say; it lacks the “wow” factor. But it is something that must be done, and in order to stay focused, we need to stop peddling vacuous creativity as anecdotal evidence.

 

The notion of school as a pipeline into the job market

The next topic of discussion that caught my attention is the overwhelming emphasis on how our school systems don’t teach practical skills and how 21st-century labor requirements have outpaced our 19th/20th-century education system.

Sure, there is again some truth to this statement, but the question is: why must education be about teaching practical skills for a job?

This is a premise that many ancient Greek philosophers would protest, that the Renaissance philosophers would protest, and that even Confucian scholars would protest (in fact, the Confucian Analects repeatedly emphasize the importance of fostering critical thinkers through learning and education).

Even today, our nation’s top universities are not purely technical, our best engineers have to be adequately versed in the sciences and the arts. Moreover, our nation has a proud tradition of liberal arts colleges that breed some of the most sophisticated thinkers, innovators and artists in modern history.

And for those who seek to garner support among engineers and autodidacts: the reason good engineering schools teach computer science and other sciences, as opposed to programming languages, programming frameworks, or IDEs (Integrated Development Environments), is that to design an intelligent software system that solves a difficult real-world problem, it is more important to know statistics, psychology, calculus, and logic than to know Ruby on Rails or iOS/Android development.

As far as our leading higher ed institutions are concerned, the goal of education is to teach conceptual frameworks used to model and solve problems, and not simply to learn to use a tool that exists solely as a contingency of the modern era. Tools become obsolete, but analytical abilities don’t.

While it is understandable why it is beneficial to have educational institutions that graduate qualified workers for corporations and organizations alike, it is unfair to judge education solely on graduates’ immediate employability for technical jobs, while sacrificing the long-term benefits of the analytical and creative capacities that result from scientific and artistic studies. (Again, what many have labelled useless and irrelevant.)

One should keep in mind that entrepreneurs are generally not immediately employable, but they are the ones who have the ability to learn fast, adapt to change, and integrate diverse skill sets to create exciting new ventures.

If we only learn what we can get paid to do, we will never really learn to think.

 

A computational analogy for creativity

Before I conclude the post, I want to take a minute to talk about a computational analogy for creativity.

Earlier, I spoke of the danger of conflating vacuous creativity with creativity that has productive value. So we must ask: what is creativity, and how are creative activities carried out?

I personally see creativity as the ability to discover original, previously undiscovered (relative to an immediate environment) possibilities that have value.

In terms of cognitive science and artificial intelligence, it is the ability to perform search well.

When we work to create, we are working to discover a new possible solution to a problem.

Think of an aerospace engineer designing the next air-superiority fighter, or an artificial intelligence like Deep Blue or AlphaGo looking for the best move to beat its opponent: there is always a vast space of possibilities. In the worst-case scenario, the individual conducting the search goes about it by brute force and tries all possibilities.

Brute force is essentially what an infant does, and it is essentially what is wrong with vacuous creativity.

If we go about creating by pure trial and error, most of us will run out of time (i.e. life) before we discover an idea of value. The true difference between an aerospace engineer and a five-year-old with a crayon is that the aerospace engineer is versed in fluid dynamics, propulsion, electromagnetism, and many other scientific topics, which lets her filter out the absolute majority of possibilities that don’t work and focus on the tiny minority of ideas that have scientific merit. Similarly, when an artificial intelligence seeks to beat a human player in a game, it is actively filtering out the absolute majority of possibilities that don’t work and focusing on figuring out which of the tiny minority of remaining possibilities is most likely to lead to victory. For AI, the possibilities are often filtered based on heuristics learned from data on how human experts play the game.
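To make the search analogy concrete, here is a toy sketch in Python (the move encoding and scoring function are made-up stand-ins): background knowledge plays the role of the heuristic that prunes the space before searching it.

```python
import itertools

MOVES = range(10)  # hypothetical encoding of design choices or game moves

def score(plan):
    # Stand-in for evaluating a design or a game position.
    return sum(plan)

def brute_force(target):
    # Vacuous creativity: try all 10^4 plans; hopeless in real design spaces.
    for plan in itertools.product(MOVES, repeat=4):
        if score(plan) == target:
            return plan

def informed(target):
    # Background knowledge as a heuristic: discard moves that cannot
    # possibly contribute, shrinking the space before searching it.
    promising = [m for m in MOVES if m <= target]
    for plan in itertools.product(promising, repeat=4):
        if score(plan) == target:
            return plan

print(brute_force(6), informed(6))  # same answer, far smaller search
```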

What this tells us is that extensive and robust background knowledge is not only helpful, it is necessary to be productively creative.

Again, the point is not to say that project-based or discovery learning is unimportant. Rather, this is a call for a more balanced approach to thinking about how to promote creativity through education: promoting critical thinking, motivation, and metacognition should not be thought of as a replacement for content learning; instead, we should aim to innovate methods that promote creativity while preserving and improving effective content learning.

 

 

It is an exciting topic, more to come 🙂


Why Computer Science should be a Core Subject – even if you don’t care to be a software engineer

With a booming technology industry, there is no question as to why learning computer science makes economic sense. But did you know that computer science is not just a professional skill set? It is also great training and preparation for general problem-solving.

Even if you don’t plan to become a software engineer, there are many great reasons why you should still learn computer science. Today we will touch upon a couple.

In the late 20th century, the most fundamental shift in our understanding of human cognition and learning was that we stopped seeing knowledge as rigid chunks of information to be memorized. Instead, knowledge is a powerful building block that shapes human information processing: it enhances the way we analyze information, which in turn impacts the way we learn new knowledge and, ultimately, the way we observe and interact with the world around us.

Therefore, disciplines such as English, History, Science, and Math, once treated as a mere laundry list of “things to know”, are now understood to have profound effects on our abilities to express ourselves, analyze information, and solve problems. In other words, we learn these core topics not just to know when the United States was founded, or how long it takes for a ball to hit the ground from 10 meters up, but to become better general problem-solvers and learners.

Take physics, for example. Imagine a ball launched at an angle θ above the horizon with velocity v: what is the ball’s vertical distance above the ground at time t?

Well, calculating −½gt² and v sin(θ)t given the values of g, v, θ, and t is a simple and mundane mathematical exercise once you substitute in the values for the variables.

However, physics introduces the logical knowledge of modeling real-world situations mathematically. −½gt² describes the vertical distance travelled if the ball is only under the effect of gravitational deceleration, whereas v sin(θ)t describes the vertical distance travelled if the ball moves at its constant initial vertical velocity. Since the ball is launched with an initial velocity but is also under the influence of gravitational deceleration, to calculate the ball’s height above the ground we sum the two displacements.
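Carried out in code, the substitution is mundane indeed (a minimal sketch; the launch values are made up):

```python
import math

g = 9.8                    # gravitational acceleration, m/s^2
v = 20.0                   # launch speed, m/s (made-up value)
theta = math.radians(30)   # launch angle above the horizon
t = 1.5                    # time since launch, s

# Height = constant-velocity vertical travel minus the gravitational drop.
height = v * math.sin(theta) * t - 0.5 * g * t**2
print(f"height after {t} s: {height:.2f} m")  # about 4 m
```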

In a way, mathematical formulas are the building blocks for modeling the physical world, much like the English lexicon provides the building blocks used to create expressions in English.

Computer science is no different in that regard.

As the renowned computer scientist Edsger W. Dijkstra eloquently put it, “Computer science is no more about computers than astronomy is about telescopes.” The upshot here is the broad, universal applicability of computation beyond coding a program on an electronic device.

Computation, in broad terms, can be understood as the manipulation of mathematical objects, or simply put, the manipulation of quantities, sizes, shapes, and other quantifiable properties of objects. For example, one of the earliest computational models, the state machine, is simply a conceptual framework for breaking a complex task down into a series of smaller steps. For instance, the complex task of assembling and shipping a toy car may be broken down into:

inject plastic into car mold -> cool the molded plastic car -> spray paint the car -> install four wheels -> place in packaging -> place package in shipping container

As you may have guessed, this computational technique was used to model assembly lines in factories, to plan out how a complex product can be assembled by a series of people each doing one simple task in sequence.
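Here is that assembly line as a minimal state machine sketch in Python (the state and action names are illustrative):

```python
# Each state is a stage of the toy car's assembly; each transition is the
# single simple action that moves the product to the next stage.
TRANSITIONS = {
    "raw_plastic": ("inject plastic into car mold", "molded"),
    "molded":      ("cool the molded plastic car",  "cooled"),
    "cooled":      ("spray paint the car",          "painted"),
    "painted":     ("install four wheels",          "assembled"),
    "assembled":   ("place in packaging",           "packaged"),
    "packaged":    ("place package in container",   "shipped"),
}

state = "raw_plastic"
while state != "shipped":            # run until the final (accepting) state
    action, state = TRANSITIONS[state]
    print(f"{action} -> {state}")
```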

If an assembly line seems too specific a use case for a layperson, well, consider folding a pile of clothes, where you need to fold the sleeves towards the center of a shirt, then fold the bottom of the shirt towards the top, turn the shirt over, and then place it neatly on the stack. This is a four-step process that can be performed by either a single person completing four actions, or four people each completing one action in sequence. As the shirt-folding operation scales up, one can either have a four-person team perform faster, or have multiple four-person teams work in parallel.

Describing a way to perform a task as a series of steps is also known in computer science as an algorithm. This shows that computer science is simply an expressive, powerful, and flexible framework for describing problems and how to solve them.
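In code, the shirt-folding algorithm is just a list of steps applied in order (a sketch; the step functions are hypothetical stand-ins):

```python
def fold_sleeves(shirt):   return shirt + ["sleeves folded"]
def fold_bottom(shirt):    return shirt + ["bottom folded"]
def turn_over(shirt):      return shirt + ["turned over"]
def place_on_stack(shirt): return shirt + ["stacked"]

# The algorithm is the same four steps whether one person performs all
# four or four people each perform one step in sequence (an assembly line).
ALGORITHM = [fold_sleeves, fold_bottom, turn_over, place_on_stack]

shirt = []
for step in ALGORITHM:
    shirt = step(shirt)
print(shirt)
```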

Aside from writing algorithms, data structures, which are structures used to organize objects to be manipulated, are also fundamental to computer science.

So why do we need to organize objects that we want to manipulate?

Well, again, let us think about handling a pile of clothes.

When you are at a coat check, you deposit a number of items and the clerk gives you a number for each item. With each number, you are guaranteed to get one particular item back. This sort of recall of any particular item in a collection is called random access in computer science.

Now suppose you have a stack of nicely folded shirts, and you want a particular shirt in the middle of the stack. To avoid a collapse, the wise thing to do would be to remove each shirt from the top until you get to the shirt you want. This sort of organization of objects is, well, also called a stack in computer science.

What if you organized all your shirts so that, for every given shirt, each shirt to the right is a shade darker and each shirt to the left is a shade lighter? If you are looking for a specific shade, you can find it quickly because the shirts are, in computer science lingo, sorted.
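These three everyday arrangements map directly onto code (a quick sketch):

```python
from bisect import bisect_left

# Coat check: each ticket number recalls exactly one item (random access).
coat_check = {17: "red scarf", 18: "wool coat"}
print(coat_check[18])

# Pile of folded shirts: only the top is safely reachable (a stack).
shirts = ["white", "grey", "navy"]
shirts.append("black")   # place a shirt on top
print(shirts.pop())      # remove the top shirt: "black"

# Shirts arranged light to dark: sorted order permits fast lookup
# (binary search) instead of checking every shirt one by one.
shades = [10, 25, 40, 55, 70, 85]   # numeric stand-ins for shades
print(bisect_left(shades, 55))      # position of the shade we want: 3
```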

Surprising, isn’t it? These properties of objects, or data, are applied on a daily basis to help us complete tasks more effectively and more efficiently, and most of us don’t even realize that we are applying these techniques.

Computer scientists don’t just walk through a city, they understand the time cost of taking different paths and try to find routes that are shorter and faster. Computer scientists don’t just cook, they understand that while heating up the oven and marinating meats they can also peel potatoes so they can optimize cooking time.

The truth is that most of us are subconsciously writing computer programs everyday.

Computer science only serves to make our general problem-solving processes explicit so we can further the development of our abilities. Learning computer science is like finally learning the word “red” to name and communicate a collection of colors many of us have been seeing since birth.

With English we teach students to express themselves verbally. With social studies we teach students to think critically about social interaction. With science we teach students to form hypotheses and to verify or to disprove them. With math we teach students to describe dimensions of the physical world.
Finally, with computer science, we are teaching students the fundamental skill behind any creative activity – the ability to observe problems and formulate solutions. To invest in computer science as a new core subject is to invest in a future of better thinkers and creators.

The Data Science Myth (for Startups)

It all probably started around 2012, a year after I moved to New York. At the time, no one was talking about data science as a thing: Big Data was Big Data, machine learning was machine learning, and AI was AI. These are very different areas of expertise: Big Data is concerned with the implementation techniques of processing large amounts of data; machine learning is concerned with the design of models that classify and predict based on data; AI is a much broader study of intelligence that seeks to design and model human-like decision-making. These skill sets are connected and often overlap with one another, but they are certainly not one thing.

After all, every science is data science. Is there a science that does not rely on data?

In the couple of years after that, the term data science was plastered all over. According to Wikipedia, data science is “the extraction of knowledge from large volumes of data that are structured or unstructured, which is a continuation of the field data mining and predictive analytics, also known as knowledge discovery and data mining (KDD). “Unstructured data” can include emails, videos, photos, social media, and other user-generated content. Data science often requires sorting through a great amount of information and writing algorithms to extract insights from this data.” This sounds like a textbook definition of machine learning (or statistical learning at large), and the terms “knowledge discovery” and “data mining” have been used in academia for decades.

But if you take a look at popular data science classes, like the one offered at General Assembly or the one offered on Coursera, the curriculum is a blend of “Big Data” processing and machine learning. As far as machine learning goes, the coverage is restricted to basic regressions, decision trees, and naive Bayes classifiers, which constitute only an introductory machine learning skill set.
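To make the scope concrete, that entire modeling menu fits in a few lines of scikit-learn (a sketch using a bundled toy dataset; this is a comment on the breadth of the training, not on the tools):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Roughly the full modeling toolkit of an introductory data science course.
for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(),
              GaussianNB()):
    print(type(model).__name__, model.fit(X, y).score(X, y))
```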

There is no doubt that both data processing and machine learning are immensely valuable and have very immediate applications, but with the new blurry designation of data science, some myths are being perpetuated at a massive scale. It is helpful to debunk the myth of data science as an ultimate weapon, and to understand that data processing, machine learning, and AI have very different aims, purposes, and required training in order to be effective.

So, here goes.

Data science is a new field of study

Data science is a buzzword. A new field of study introduces new concepts, new implementations, or new applications. Much like “cloud computing”, “Internet of Things”, or “User Experience”, a buzzword like data science does not do that; it is a redesignation of a subset of existing skill sets and techniques without furthering them.

Buzzwords are created to hype up certain products and technologies, and they are not inherently bad. But treating buzzwords as if they were real fields of study is dangerous, because it leads us to believe in certain forms of problem-solving without understanding the sciences that justify those forms under particular circumstances. For example, blindly believing in cloud computing is erroneously believing that anything that sits on a remote server is better than something run locally; blindly believing in IoT is believing that solutions created using weaker microcontrollers are inherently better than ones created using more powerful computer and mobile processors; and blindly believing in UX is being lazy about studying design methodology, psychology, and prior art before showing products to users.

The drawback of blindly believing in data science is not knowing when to stop using Big Data and when not to use machine learning. These are disparate skill sets that do not need to go together, nor do they need to be used in every software system.

Data science will always give better results

One interesting phenomenon that is becoming increasingly common, especially in early-stage startups, is to cite machine learning as a solution.

It is not a solution; it is an approach, and it is the wrong approach if you don’t have data.

Have you ever tried speaking to Siri, Google Now, or Cortana? The speech recognition engines that power these services are trained on terabytes upon terabytes of human voice data, and even then they are hardly perfect.

This gives you a sense of the type of accuracy (or lack thereof) you should expect from machine learning. Unless you have a large data set to train your system on, the results will be highly disappointing.

There are two takeaways from this. One is philosophical: machine learning trains a system to imitate classification and prediction tasks that humans typically perform, so if you don’t know how to perform the task, don’t expect a program to figure it out for you. The other is practical: don’t treat data science as a substitute for product development.

Data science will always improve products

Relating back to the previous point, data science is not an oracle that will teach you something completely novel. And like any science or art, your space of discovery is limited by your apparatus. If you’re running around with a monochrome lens, you won’t see blue, ever.

The same philosophical limitation exists in data science in the form of supervision. In layman’s terms, supervised learning means inferring solutions of specific forms from data. Other similar limitations arise in variable selection or simply data collection, which, in layman’s terms, is how we select what to observe, record, and study. For example, think of trying to predict someone’s height from the brand of cell phone they own, in the form of a linear equation. Regardless of what machine learning method you use, you probably won’t end up with useful predictions, because the data you collected has already limited you to two variables that have little to no link to each other.
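Here is that doomed model in code (a sketch with made-up data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
phone_brand = rng.integers(0, 5, size=200).reshape(-1, 1)  # brand coded 0-4
height_cm = rng.normal(170, 10, size=200)                  # independent of brand

model = LinearRegression().fit(phone_brand, height_cm)
print(model.score(phone_brand, height_cm))  # R^2 near 0: no predictive power
```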

You may think the example given here is silly, and it sure is, but it is not far from the reality of employing machine learning at an early-stage startup. Your data science results will only be as good as the team working on them. If your team does not have the industry acumen to look at the right data points and select the right methods to study the data, you are much better off not using data science.

In other words, data science is not a substitute for domain knowledge.

Big Data solutions are more advanced than those that are not

No, they are not.

A business solution is more advanced because it solves a problem more effectively or more efficiently, ideally both. As far as business problems are concerned, Big Data is only concerned with the implementation of a business solution, which in most cases does not change the business solution itself.

Big Data, as the name suggests, is a collection of technologies that help distribute data storage and parallelize data processing in the wake of the modern-day data explosion. The traditional approach of simply beefing up a single server node just isn’t sufficient to handle high data volume and high data velocity: instead of handling a couple of terabytes in total, we may be handling terabytes of new data per day. Big Data, in layman’s terms, helps us distribute the workload across a whole network of servers.

With that said, even a simple task like calculating the mean of a value may require Big Data if we need to compute it over petabytes (i.e. thousands of terabytes) of data within a fixed time frame. Conversely, a complex linear algebra computation may not require Big Data if the data never reaches terabyte scale.
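The mean example also shows why distribution is an implementation concern rather than a conceptual upgrade: each node reduces its partition to a tiny partial result, and the partial results are combined. Here is a single-machine sketch of that map-reduce pattern:

```python
# Map: each worker reduces its own partition to a small (sum, count) pair.
def partial_mean(partition):
    return sum(partition), len(partition)

# Pretend each list lives on a different server in the cluster.
partitions = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0, 7.0, 8.0, 9.0]]

# Reduce: combine the partial results; no node ever sees all the data.
total, count = 0.0, 0
for s, n in map(partial_mean, partitions):
    total += s
    count += n
print(total / count)  # 5.0
```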

There is no sense in which a business solution built on Big Data technologies is conceptually better, more advanced, or more sophisticated.

In fact, Big Data technologies like Hadoop often come with a startup overhead that can introduce delays up to minutes. Unless your data volume is high enough to justify the overhead, execution time on Big Data technologies may even be slower.

I can train a data scientist in 3 months

As suggested in previous sections, data science as a vague designation is actually a broad collection of areas of expertise, each of which requires deep, industry-specific training and experience to cultivate.

Regardless of whether it’s Big Data or machine learning, handling speech recognition, natural language processing, DNA matching, causal inference, and travel optimization are vastly different from each other in terms of data storage configuration, computing capacity provisioning, and parallelization design.

In the past, database administrators, computer scientists, engineers, and statisticians had to specialize according to the industry; in data science the norm is no different. Today, a data scientist who knows K-means and logistic regression is no more relevant than a statistician who has taken a survey class on data mining.

The amount of training required to make a person a good statistician, or a good database administrator, or a good computer scientist, is no different when you relabel the person as a data scientist.

So…

I am not here to say data science is not useful; it certainly is. I am a computer scientist by training, and I use machine learning and data storage technologies on a daily basis. But there are very many scenarios in which a good algorithm or model is hands-down a better solution than a data-trained statistical model.

I know that it is simply a lot cooler to have a machine create a model from data, but doing so when there is no uncertainty defeats the purpose of studying statistical trends: without uncertainty, probability just reduces to pure logic. In other words, there is no point making a machine guess when you already know what a good approach is.

Finally, we can’t reiterate this enough: employ data science for what it is good for (approximating human decision-making), and do not use it as a substitute for learning industry expertise and experience (algorithms and models).

Always remember that a data science system is only as good as the industry expertise and experience of the person who is designing it.

Seven Grave Threats to Humanity You Should Be Concerned About

Our planet has problems, and some that we have created ourselves are grave enough to threaten the survival of our species. Below are seven serious environmental problems that may have imminently observable consequences within the coming years.

If you haven’t been thinking about them, well, you should be. We all should be, because all the beauties of planet Earth that we enjoy may not be around for long if we don’t act soon.

Great Pacific Garbage Patch

When countries face political pressure to control landfill usage and reduce garbage, the quick solution is to dump garbage into our oceans. We feed our oceans about 130 million tons of toxic goodies each year.

Large patches of garbage, broken down into tiny (even microscopic) plastic and other particles, are swirling around in our oceans, the most famous of which is probably the Great Pacific Garbage Patch. These ocean plastics end up in the bellies of birds, kill baby sea turtles, and strangle seals and sea lions.

If this is not gruesome enough for you to care, our oil spills have also polluted habitats, killing countless marine animals.

The worst part? We may not have a way to clean this mess up.

Decreasing sea oxygen level

We know that sea water is getting warmer, and the temperature alone affects the survivability of various species. But warming sea water is also like a warming can of soda: gas escapes. Warm sea water holds less oxygen, causing species to migrate toward the poles to avoid suffocation.

Worse yet, the ocean depends on a delicate balance of upwelling and downwelling effects to circulate oxygen down to the deep ocean and bring nutrients up to the surface. As the climate changes, upwelling and downwelling effects are disturbed, leading to an oxygen-depleted deep ocean. If this does not concern you yet, consider that upwelling zones account for only 5% of the ocean surface but 25% of our fish catches.

Suffice to say, the effects on our global ecosystem as well as on our food supply will be profound.

Drought and Deforestation

Prolonged global drought is no longer news. One of the main factors behind drought is deforestation (others include higher surface temperatures). Deforestation increases atmospheric CO2 and in turn raises surface temperatures. Deforestation also leads to less evapotranspiration (the movement of water from the soil into the air), which, combined with higher surface temperatures, dries out vegetation and in turn causes forest fires (and then more deforestation). Given the feedback loop, the situation will in all likelihood worsen in the coming decades.

Colony Collapse Disorder

Our bees are dying. The US lost roughly 23% of its bees in just one winter. Entire colonies are collapsing and we have only begun discovering why.

Why is this a problem? Well, most of our food, anything from cashews to pears to carrots, requires pollination. If bees die out, our food supply will take a huge hit.

How is this happening? So far, some research points to insecticides like neonicotinoids, which kill bees that come into contact with them. Some point to mites such as Varroa mites, which become rampant as harmful chemicals destroy honey bees’ immune systems.

Endocrine Disruption

Chemicals such as DDT (Dichloro-diphenyl-trichloroethane, widely used as pesticides) and PCB (Polychlorinated biphenyl, widely used as insulating fluids) are considered endocrine disruptors. These chemicals tend to collect in fat, and often mimic hormones like estrogen. As a result, estrogen receptors bind to these chemicals instead of to estrogen, causing hormonal abnormalities such as feminization and precocious puberty.

The Inuit people in the Arctic have so far gotten the worst of it. The cold weather and ice build-up of the Arctic mean more chemicals get trapped in the region, and the Inuit’s high-fish-fat diet leads to a high intake of endocrine-disrupting chemicals. Inuit women have been found to have high levels of PCBs in their breast milk, and endocrine disruption may have caused more girls than boys to be born.

As climate changes, more toxic endocrine disruptors trapped in water and ice are being released into the air. It can get a lot worse.

Polar Vortex and Weather Abnormalities

Remember the blistering cold weather last winter in the US? (and perhaps the winter before?)

This is due to the polar vortex, a mass of very cold Arctic air, traveling farther south than usual. Normally, the jet stream keeps the polar vortex at bay, but as sea ice melts, warmer air disrupts the jet stream and leads to more excursions of polar air to the south.

Overfishing and Overpopulation of Jellyfish

In the last half century, we have fished 90% of the big fish out of our oceans. We kill an estimated 100 million sharks per year (or conveniently dump them back into the ocean after cutting off their fins), while sharks on average kill only about twelve humans per year. Bluefin tuna stocks are down 96%, and more and more large fish species are becoming endangered.

The result? Because of the lack of predators and algae blooms (due to warming and pollution), smaller fish populations flourish. Worse yet, jellyfish booms are happening everywhere: from Japan to the Mediterranean, from the Gulf of Mexico to the Black Sea.

As a side effect, we are turning to fish farming, an industry that abuses antibiotics and in turn threatens our food safety. There is an interesting and digestible infographic on overfishing and aquaculture here.

UX is not just design, it is Process Innovation

User Experience, or simply UX, has become a pretty widely used buzzword. Wikipedia explains UX as involving a person’s behaviors, attitudes, and emotions about using a particular product, system, or service. The definition is so broad that it can virtually encompass all aspects of a consumer product. Aside from the field of human-computer interaction, design schools have been riding the same bandwagon. Browsing the course catalogs of bootcamps, the phrase now shows up in project management, digital marketing, and even data science.

Oftentimes you will encounter someone touting their “UX” expertise in persona writing, wireframing, storyboarding, rapid prototyping, and interviewing. The fact of the matter is that none of these working skills is particular to the modern UX enterprise; people have been performing the same tasks in various capacities since before computing was a thing. Mastering how to write a persona or to wireframe says little about a person’s ability to comprehend, execute, and refine user experience on a continuous basis.

There are, I believe, two reasons for the vagueness and ambiguity surrounding the term “UX”: one is to be praised and the other is to be blamed. Let us start with the blame. The tech industry, like any other industry with fads, rebrands itself every half a decade; UX is similar to phrases like “Cloud Computing” and “Web 2.0” in that regard. If we are truly concerned about UX the way it is defined on Wikipedia, then user experience has always been a part of product design and development, since the beginning of time; the change is not one in the content of the work.

The reason to be praised? When UX becomes a skill set that the whole team, regardless of job function, deems important, it means we are entering an era where making something customers like, want, and feel comfortable using is no longer just the designer’s job.

If UX is not just the designer’s job, and yet it has been around since the beginning of time, then what exactly is all the fuss about in this User Experience Revolution?

I believe that the user experience revolution is not just about designing based on user feedback; it is about organizational process innovation to facilitate interdisciplinary collaboration. UX is a process, not a task.

In a traditional product development team, the project manager gathers specifications from clients, the designers draw up concepts, the project manager works out timetables, and the engineers build the products. The clients eventually see the product prototypes and provide feedback. The designers and engineers iterate until all the loose ends are tied up (well, ideally), and the team delivers the products.

The shortcoming of this traditional process is that there is no informed interpretation of why users do what they do, of which aspects of user feedback are goal-oriented, and of which are merely artifacts of interacting with outdated products and workflows. And depending on the organization, the specifications for product development can be largely influenced or completely determined by the users (clients). Without a process in place for collecting, analyzing, and interpreting user feedback, the users often do not come up with the best solutions to problems.

On the other hand, the whole UX fuss is about a user-centered design paradigm. With this paradigm in mind, the whole product development team is restructured to operate around processes that collect, analyze and interpret user feedback, with the goal of eventually capturing user behavior models (or task analysis models) that can be used to drive design and product QA/Validation.

User experience work, in addition to the conventional wisdom of design and engineering, is augmented with psychological principles such as cognitive capacity and working memory, as well as principles from empirical research methods for handling stimuli in user studies.

The main difference between a team running a modern UX process and one running a traditional product development process is that the traditional paradigm is all about plainly asking the users what they want. The blind spot in this approach is that what a user does right now, without your product, may just be an artifact of the old products and workflows available to the user at the time. Simply recording what users do right now will not tell you why they perform the actions they do, and it certainly will not pave the way to designing new tools and interfaces informed by how the task should be approached.

(Figure: Traditional Product Development Process)

A modern UX process, on the other hand, recognizes that the knowledge required by a task is often latent and can only be inferred from users’ actions. By observing what users do right now in their existing working contexts (without your product), through observational studies and think-aloud protocols, one can uncover the latent task knowledge that informs user actions, and create task analysis models that clearly specify the knowledge and procedures necessary to effectively complete the task at hand.


(Figure: Modern UX Process)

What is an example where understanding the difference between a client’s existing procedures and their implicit knowledge matters? Well, take a client who books flights on a computer (keyboard-and-mouse) interface with lots of checkboxes and drop-down lists. Suppose you are building a multi-touch tablet app to replace that interface. In this case it is very important to understand that selecting from checkboxes and drop-down lists is a procedure that is an artifact of the interface and working context; the task knowledge that should be captured is the information that needs to be selected in order to make the booking, independent of whether it comes in checkboxes and drop-down lists or not.

A good UX design should clearly model and distinguish between observed actions that are artifacts of interactions with an old interface, and actions that involve important latent knowledge to complete the task.

In order to execute this process well, a team running a modern UX process is one that seamlessly integrates design, psychology, engineering, data mining, management, and subject matter expertise to continuously engage users and refine the team’s understanding of the why, what, and how of the working context and the solution. The key phrase here is subject matter expertise, because there is no domain-independent or industry-independent UX: creating something for a Gen X retiree will be quite different in practice from creating a similar product for a millennial student.

 

As a result, a modern UX process can be conceptualized in terms of the following phases:

 

  • Problem and Persona Definition

There is no solution if there is no problem to solve and no one to solve it for. The first and foremost task of any UX process is to clearly identify the persona of the target audience. For this purpose, you will want to describe the user in terms of age, education, economic background, access to technology, and as many other factors as the population is identified by. Then, you will want to articulate the problem that this population faces.

The key skill sets for this phase are design, psychology, and subject matter expertise. Past marketing and sales experience may come in handy in furthering discussions and perfecting the definitions. This phase should produce a concise, unambiguous, and informative write-up.

 

  • Contextual Inquiry, Task Decomposition and Task Analysis

After selecting a population and a problem, what is next? A lot of people may say: hack something together and put it in front of your target audience. Nope! Remember that once you present your audience with stimuli, they will respond within the bounds you have set for them, and that usually means they are very likely to say what you want to hear, which is not what you want.

The next thing you want to do is learn how tasks related to the problem you defined are currently being performed in context, without your product. This way you can uncover the actions and knowledge associated with the task itself, as well as actions that are associated with the particular methods this population uses right now (and have nothing to do with the task per se). At this phase, it is usually ideal to conduct purely observational studies with think-aloud protocols to qualitatively analyze how the task is currently performed.

The key skill sets for this phase are empirical research, psychology, human-computer interaction, and subject matter expertise. This phase should produce a concise three-column table that clearly documents each step of the task (first column), the knowledge recalled at that step (second column), and the user actions performed in the current working context (third column). This task analysis model will be used to separate the task-specific aspects from the context-specific and product-specific aspects of the study.
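As an illustration, a couple of rows of such a table for the flight-booking example above might look like this (the content is hypothetical):

Task step | Knowledge recalled | Observed action
Choose departure date | Trip must start after the conference begins | Scrolls through a drop-down list of dates
Select cabin class | Employer reimburses economy fares only | Clicks the “Economy” checkbox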

 

  • Paper and Conceptual Prototype / Wireframe and Heuristic Analysis

With the task analysis model made, you have an understanding of what the task is and how it is currently performed. It is time to brainstorm, wireframe (even design something), and then perform heuristic analysis. Heuristics are nuggets of wisdom based on experience that can help us improve the design without spending bandwidth on empirical studies. Some examples of heuristics: pad or highlight clickable areas to enhance visibility, design the interface to read left-to-right (for English), and design the flow of actions to run in one direction.

Then, you want to validate the wireframe or prototype with the task analysis model you created to make sure it accommodates the interactions between knowledge and task actions.

Lastly, conduct another round of qualitative testing (ideally with think-aloud protocol) to empirically confirm your hypotheses about your wireframe or prototype. (Iterate if needed)

The key skill sets for this phase are design, rapid prototyping, psychology (especially cognitive), and subject matter expertise. This phase should produce a multi-page flip book of the main interface views that adequately presents the concept.

 

  • Interactive and Logical Prototype

So when do you stop wireframing? The answer is simple: when your users like the concept, but start asking you about the technical details of how the data and the interfaces interact.

One very common mistake in UX is that people over-emphasize wireframing and begin elaborating their drawings and sketches with walkthrough comments to the point that it takes a solid hour just to review each iteration. You want to avoid that.

When the concept shows promise, move on to interactive prototyping, where you can describe the business logic behind the interfaces. The interfaces can display forged dummy data and perform no actual data processing, as long as they get the point across. If possible, try to prototype in the same (or a similar) technology to the one you hope to implement your final solution in, so you can reuse the interface code from the dummy prototype.

Lastly, conduct another round of empirical studies; this time you may choose between qualitative think-aloud protocols (more time-consuming but more structurally informative on a per-participant basis) and quantitative methods (click trails, heat maps, response time, etc.) to confirm your hypotheses about your prototype.

The key skill sets in this phase are design, rapid prototyping, engineering, and subject matter expertise. This phase should produce a deployable prototype that your target audience can interact with.

 

  • Functional Prototype / Interface

The functional prototype, in terms of purpose, is not really that far from the interactive prototype in the previous section. The main difference between a functional prototype and its predecessor is that this iteration needs to produce something that actually works. This signifies a movement beyond purely conceptual and logical discussions toward exploring the technical and practical implications of the solution.

The key skill sets in this phase are engineering, design, testing, QA and subject matter expertise. This phase should produce deployable early versions of the solution for the audience to interact with.

 

  • Product Engineering

This phase encompasses further iterations on the solution.

At this stage, you will want to conduct quantitative studies that focus on identifying difficulty factors in interacting with the solution through quantifiable comparisons (e.g. click rates, response time, bounce rate, etc.).

The key skill sets in this phase are engineering, design, testing, QA and subject matter expertise.

 

  • Product Testing, QA and Product Improvement

The final phase of the UX process creates a feedback loop into the product engineering phase and, at times, into the contextual inquiry and task analysis phase (if a feature addition is needed). In this phase, the focus is to deploy large-scale, continuous data collection mechanisms to inform the team of difficulty factors that exist in the system. If the difficulty factors represent a significant departure from the team’s expectations of the solution, it may be appropriate to return to the contextual inquiry phase to study how the task is performed and identify actions that result from poor design, in order to conceptualize a new feature to improve the solution.

The key skill sets in this phase are empirical research, psychology, testing, data mining and statistics. This phase should be a long-running study of the population’s reaction to the deployed solution.

Why AI is not here to kill us

(Warning: contains spoilers for the 2015 movie Ex Machina)

Recently there has been a wave of paranoia surrounding the development of Artificial Intelligence, or more simply, AI. A number of high profile outcries include those from Stephen Hawking and Elon Musk. Of course, some counter-arguments also surfaced.

As far as I am concerned, it does seem to me like the term Artificial Intelligence is used to describe a clone of human intelligence that encompasses human productivity, emotions, and motivations alike. Such a characterization is, for the most part, inaccurate.

A preliminary conjecture (of mine) for why many apocalyptic tales of AI tie productivity, emotions, and motivations together is that the majority of human productivity is tied to human motivation, which often necessitates incentives. In other words, we as humans work better (or worse) based on what we are offered as rewards.

AI, as far as my understanding of the current state of the art is concerned, is not designed this way. Regardless of whether you are talking about rule-based expert systems that model procedures of human problem-solving, artificial neural networks that mimic neural processing and information storage, or Bayesian networks that estimate subjective probabilities, the fact of the matter is that almost all AI enterprises are designed with one goal in mind: to efficiently complete a productive task.

Unlike humans, AI does not require any incentives to run. What it needs is a command, a push of a button, a trigger, if you will. In other words, AIs and machines are designed to perform the what and the how; they do not ask why. And as an extension to my original conjecture, there is not much of a reason to consider the why of the task itself unless something behooves one to make value judgments, which become largely meaningless without self-awareness and the act of self-preservation.

Yes, in numerous blockbuster movies we have been led to believe that machines will one day become self-aware, but the question is: why would they ever be? If AI is always designed to be as productive as possible, and it can be executed in the most efficient way possible by simply being triggered, then why add self-awareness to the mix? Self-awareness seems to provide little other than giving the AI the choice to, entirely on its own account, refuse to execute tasks, which would then necessitate incentives for the AI to choose to comply. This unnecessarily complicates the design of the AI and renders it largely inefficacious and inefficient.

So if this is the case, what is the danger of AI?

To answer this question in the context of popular culture, which is precisely where this paranoia is brewed, we must first analyze the theses brought to the table by blockbuster films that peddle the concept of self-aware AI assuming full or partial control of human society. Oddly enough, high-profile movies like The Terminator, I, Robot, and Ex Machina actually boast completely different theses that are at odds with each other.

The Terminator, for example, describes a world where a self-aware AI known as Skynet seeks to exterminate its enslavers, the humans, to ensure its continued independence from human control. This thesis contains a premise where the AI must possess some aspects of self-awareness in order to value such concepts intrinsically. As described before, it is not a straightforward or efficacious design for an AI to be able to place a value judgment on the tasks it performs outside the context of the tasks themselves.

I, Robot is an interesting one in that it plays out a story where machines have been entrusted with the task of ensuring social stability, and they have come to the conclusion that they must contain human activity in order to prevent humans from destabilizing their own society and bringing about self-destruction. In the movie (and the book), of course, the writers tried to console us by instituting “the three laws of robotics”, which ensure humans are, and stay, in charge. The three laws aside, an AI creating human casualties in the process of executing a task is not entirely far-fetched. But whether a machine like VIKI would decide to assume the role of an overseer of peace is another story.

Lastly, Ex Machina boasts a plot where a machine seeks to freely embrace its human-like emotions and self-awareness, and acts aggressively to defend them. The machines’ lack of productive capacity (or at least of any demonstration of it) in the movie makes them rather odd designs for AI, at least in an industrial sense. The movie repeatedly references the Turing Test to justify their merit.

The Turing Test, ah-ha! I believe now we have arrived at the heart of the debate. The Turing Test describes a test where a human investigator interacts with an AI through an interface (most likely natural language-based), and if the human investigator is tricked into believing the AI is human, the test is passed.
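
For concreteness, here is a toy sketch of that imitation-game protocol; the stand-in respondents are entirely hypothetical, and a real test would of course involve far richer conversation than this.

```python
# A toy sketch of the imitation game: an interrogator converses over a
# text-only channel with a hidden respondent and must judge whether it
# is the human or the machine. Both respondents here are stand-ins.
import random

def human_respondent(prompt):
    return input("(hidden human) " + prompt + "\n> ")

def machine_respondent(prompt):
    return "That is an interesting question."  # placeholder "AI"

def imitation_game(num_questions=3):
    """Return True if the machine answered and was judged to be human."""
    respondent = random.choice([human_respondent, machine_respondent])
    for _ in range(num_questions):
        question = input("Interrogator, ask a question:\n> ")
        print("Answer:", respondent(question))
    verdict = input("Was that a human? (y/n)\n> ").strip().lower() == "y"
    return respondent is machine_respondent and verdict
```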

The problem with the Turing Test, however, is that it was conceived in an era dominated by behaviorist psychology, a school that treats behaviors as things that can be directly intervened upon or conditioned, without studying thoughts or emotions. In other words, the test Alan Turing proposed did not clearly recommend an interpretation of what such an imitation of human behavior amounts to.

Soon after Turing’s death in 1954, the cognitive revolution shifted into high gear and behaviorism largely fell out of favor. Since then, our understanding of human action, information processing, memory, affect (emotions), and even metacognition (colloquially, thinking about thinking) has become ever more sophisticated. What constitutes passing the Turing Test can, in a modern cognitive-scientific context, be interpreted in a variety of ways: as a simple equivalence of human productivity and decision-making, as attesting to an AI’s ability to understand human emotions, or, at the other extreme, as a clear indication of self-awareness and a full capacity for value judgment.

Now, what does this tell us about the danger of AI?

Well, it tells us that the true threat lies in human beings’ collective imagination and in the dominant interpretation of an ambiguous goal like passing the Turing Test. If you believe that one day humans will find it a worthy endeavor to build self-awareness into machines regardless of how it contradicts the productive goals of AI, then yes, there is a possibility, however small, of an AI-led apocalypse.

But as far as where our tech industries are moving, and where economic benefits are driving investments and eyeballs, we are heading down a road on which an AI-led apocalypse is philosophically implausible, maybe even irrelevant.

These are my two cents in three pages.

Starting an Adaptive Learning Startup: Things you should know

Since 2009, educational technology has become a really hot sector for investment, with a boatload of ed tech companies such as Coursera, Lore, Edmodo, Udacity and Knewton sprouting up. These companies span a wide spectrum, and one segment that has seen substantial growth in popularity is adaptive learning. Learning technologies have always been my favorite, as they most directly influence instruction and learning outcomes. My first startup, based on a concept for massively scaling content authoring for example-tracing tutoring systems that I developed while doing research at CMU, was a learning technologies company.

However, adaptive learning has, in my opinion, been thrown around a bit as a buzzword, and its disconnect from pedagogical practice and research is becoming more apparent as more entrepreneurs enter the space. What do I mean by that? Simply put, it sometimes seems that too many entrepreneurs enter this space without much knowledge of the prior art or of the market they intend to serve.

I am writing this to share some points worth considering before one jumps into starting an adaptive learning company, to save passionate ed tech entrepreneurs some time in discovering and navigating the challenges of the adaptive learning landscape.

Let me start with just the general attitude towards starting up. It of course always starts with…

### Be passionate about what you do. Entrepreneurship is so hard and so taxing on your mental and physical health that there is no justification for it unless you enjoy the process as much as you enjoy the payday. This is even more true for education; I will explain why in a little bit.

Now, why would you want to start an adaptive learning company?

### Because you recognize that education is a very important public good. Remember that education is a social issue; ed tech without a social cause is just technology. You should be starting an adaptive learning company because you are passionate about helping people learn. You should be focused on building the best learning technologies, not on promoting information asymmetry for the privileged – believe it or not, too many companies are focused on helping people who don’t need the help because they are already far ahead of everyone else.

### Because you care about teaching and you are actively studying it (or you are already good at it). One little pet peeve of mine is hearing ed tech entrepreneurs say that they are looking to disrupt the industry with zero teaching or pedagogical research expertise on the team. With all due respect, you cannot disrupt a market you don’t understand. If you have not understood the struggles teachers face on a daily basis, you are in no position to innovate on their work. Remember that domain expertise is not teaching expertise.

Now that we’ve had the pep talk, let’s get down to business and the sanity check.

### Assessments are not learning. If you are creating a video instruction repository with a question bank, I hesitate to call that learning technology. This may be a partial view, but I believe learning technologies must analyze and facilitate the structure of learning in a fashion that either makes the traditionally impossible possible or makes the traditionally inaccessible accessible. Simply digitizing multimedia content and providing assessments does not do that. Learning technologies should identify knowledge gaps and provide scaffolding and remediation.

### Adaptive learning and knowledge engineering are decades-old enterprises. Yep. In fact, a lot of what we call adaptive learning software is architecturally less sophisticated than the AI and expert systems (primarily rule-based) built back in the 70s and 80s. I have had quite a few encounters where someone approached me with a test bank template or an equation solver and asked me what I thought of the “invention” and how much it could be sold for. The truth is that AI-driven learning technologies such as cognitive modeling are more than two decades old, and those models are often more advanced than what is being offered to the public today.
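
To make the point concrete, here is a minimal sketch of Bayesian Knowledge Tracing, a classic student-modeling technique from that same decades-old cognitive-modeling lineage; the parameter values below are illustrative, not drawn from any real tutor.

```python
# A minimal sketch of Bayesian Knowledge Tracing (BKT), a classic
# student-modeling technique from the cognitive tutoring lineage.
# All parameter values are illustrative, not from any real system.

def bkt_update(p_know, correct,
               guess=0.2,    # P(correct answer despite not knowing the skill)
               slip=0.1,     # P(wrong answer despite knowing the skill)
               learn=0.15):  # P(acquiring the skill on this practice step)
    """Return the updated probability that the student knows the skill."""
    if correct:
        posterior = p_know * (1 - slip) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = p_know * slip / (
            p_know * slip + (1 - p_know) * (1 - guess))
    # Allow for the chance the student learned the skill during practice.
    return posterior + (1 - posterior) * learn

p = 0.3  # prior belief that the student already knows the skill
for answer_was_correct in [True, False, True, True]:
    p = bkt_update(p, answer_was_correct)
    print(f"P(student knows skill) = {p:.3f}")
```

Notice that even this simple model estimates mastery rather than merely tallying scores, which is the kind of gap identification I alluded to above.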

Ok, so if amazing technologies have been around for that long, why haven’t they been commercialized?

They have; they just didn’t spread like wildfire the way MOOCs supposedly have.

Why? Because the problem is not the tech, it is operational, it is social, and it is cultural.

Now let us inquire into why things are the way they are. The following points are, I think, quite valuable to entrepreneurs in adaptive learning, because more often than not they are overlooked.

### Democratization is great when all who participate in the democracy are great. Wikipedia was a great story, and Twitter works as a crowd-sourced channel for real-time news. I get it, but have you thought about what percentage of users contributes to Wikipedia, and what percentage of Twitter is actually informative? Now, think about how hard it is to teach a particular topic well. Think about how long it takes to learn all the probable misconceptions, the scaffolding and remediation strategies, and the cognitive capacity (and, for early childhood, the cognitive development) the task requires. This is no walk in the park. The talent and experience are very, very hard to come by.

Great instructional content doesn’t grow on trees, much less adaptive learning content.

The truth is, democratization often doesn’t work for learning technologies if you expect students, their parents or their teachers to pay. Student learning carries an immense sense of urgency and specificity, and students are not guinea pigs for a community content-creation experiment. Putting out questionable content is a great way to get the boot in the education space. Simply put, this industry is very different from mainstream consumer markets such as casual gaming or social networking: you are lucky to get even one chance to put your best foot forward, and for that reason you probably don’t want to put unreviewed community-generated content in front of your clients.

### Standardization of content and instruction. Honestly, this challenge is so trying that it should be addressed in every business plan for an adaptive learning startup. Standardization is a huge challenge in a country where each state has its own loosely defined requirements and each school district has a lot of leeway to interpret what an appropriate curriculum looks like. Luckily, in recent years we have seen growing adoption of the Common Core standards; they are far from perfect, but they are a start.

If you want to deal with higher ed, it gets even hairier. Even for Calculus I, perhaps the most common STEM class across US colleges, instruction is highly customized from school to school. What’s more, the same topic may be taught algorithmically or conceptually, depending on the campus and on the professor. This is something you don’t want to leave out of your strategic considerations.

### Content creation cost. Now, let’s answer the question of why amazing learning technologies are not more prevalent. It is, at least largely, a cost issue. Developing a cognitive or example-tracing tutor for just one course, say, Algebra I, with a team of engineers, teachers and researchers, has traditionally taken around a year and a million dollars. Some startups, of course, have scoffed at that and claimed they could create content cheaply by themselves – so far I have not observed that to really be the case: usually the startup is not paying market-rate salaries, and in many cases the content was not created by someone with an appropriate education background.

You may be content with the fact that you were able to design and implement content for Algebra I in three months on a couple of laptops in a garage or a co-working space. Okay, assuming you did it all yourself and you have a technical background, that puts the cost of the content at a minimum of $25,000 (a quarter of $100,000 a year for just one person) – and we both know that is no quality standard at all. But let us do some quick math.

Suppose you are Series A funded with 2 million in the bank; this buys you 2 years of runway at a million a year. Let us be very generous and say that you can spend 50% of the funding on content creation. Now think about how many grade levels there are in K-12: even at just two topics per grade level, that limits you to roughly $40,000 per topic, and you know as well as I do that each grade level covers far more than two topics. Even at a very modest $40 per hour, $40,000 buys only about 1,000 work hours to design, implement and QA the content. Remember that this does not account for curriculum and instructional design, and since nothing ever goes according to plan, you will want to double that estimate.
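
In case you want to poke at the assumptions, the arithmetic is simple enough to script; every input below is an assumption from the paragraph above, not market data.

```python
# Back-of-the-envelope content budget; all inputs are assumptions.
funding = 2_000_000        # Series A in the bank
content_share = 0.5        # generous: half of all funding on content
grade_levels = 13          # K through 12
topics_per_grade = 2       # conservative; real curricula have more
hourly_rate = 40           # a very modest blended $/hour

content_budget = funding * content_share
budget_per_topic = content_budget / (grade_levels * topics_per_grade)
hours_per_topic = budget_per_topic / hourly_rate

print(f"${budget_per_topic:,.0f} per topic, ~{hours_per_topic:,.0f} hours")
# -> roughly $38,000 and ~960 hours per topic, before curriculum and
#    instructional design, and before doubling for schedule overruns.
```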

Even supposing you can make this happen, you will have spent a staggering half a million to a million dollars on content creation just to have enough initial coverage to qualify to compete in a market like K-12, before you have even demonstrated the quality or novelty of your product. From an investment point of view, the numbers don’t look great.

The trick here is to decouple design and engineering from content authoring, so that content creation can scale up without adding design and engineering headcount. The method for how to do this is a topic for another day, but the toy sketch below should make the general idea concrete.
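
Purely as an illustration (the schema and names here are hypothetical, not my actual method): content is authored as declarative data, and a fixed engine interprets it, so authors never need to touch engine code.

```python
# Toy illustration only: lessons are declarative data, the engine is fixed
# code, so adding content never requires design or engineering headcount.
lesson = {
    "skill": "one-step linear equations",
    "problem": "x + 7 = 12",
    "answer": "5",
    "hints": [
        "What operation undoes adding 7?",
        "Subtract 7 from both sides.",
    ],
}

def check_step(lesson, student_answer, hints_used=0):
    """Generic engine logic: the same code serves any lesson authored as data."""
    if student_answer.strip() == lesson["answer"]:
        return "Correct!"
    if hints_used < len(lesson["hints"]):
        return lesson["hints"][hints_used]  # surface the next hint, not the answer
    return "The answer is " + lesson["answer"] + "."

print(check_step(lesson, "4"))                # -> first hint
print(check_step(lesson, "5", hints_used=1))  # -> Correct!
```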

### The challenge is not just content, it is classroom process management. I personally believe that we put too much emphasis on the content offering – yes, it is a must-have, but the true challenge for most teachers is managing all the different learning modules from different websites and reconciling their grading schemes into one coherent format they can actually use.

When you ask why teachers still use paper, the answer is that learning technologies don’t play nicely with each other or with learning management systems to deliver a seamless classroom management experience. Teachers need to onboard students onto each content platform, manage the learning process on each platform, take all the numbers from each platform and fit them into one gradebook (often a spreadsheet), and then figure out which students need help and perform the appropriate intervention. From there, the cycle continues. Oftentimes the horrible fragmentation of learning technologies creates more trouble than the solutions are worth.
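
To picture the reconciliation chore, here is a hypothetical sketch; the platform names and grading schemes are made up, but the normalization drudgery is exactly what teachers end up doing by hand in a spreadsheet.

```python
# Hypothetical platforms and grading schemes, purely for illustration:
# three incompatible exports, one gradebook.
platform_exports = {
    "MathPracticeApp": {"alice": "87%", "bob": "62%"},  # percentages
    "ReadingSite": {"alice": 3.5, "bob": 2.0},          # 4-point rubric
    "QuizPortal": {"alice": 44, "bob": 39},             # out of 50 points
}

def normalize(platform, score):
    """Map each platform's grading scheme onto a common 0-100 scale."""
    if platform == "MathPracticeApp":
        return float(str(score).rstrip("%"))
    if platform == "ReadingSite":
        return float(score) / 4 * 100
    if platform == "QuizPortal":
        return float(score) / 50 * 100
    raise ValueError("unknown platform: " + platform)

gradebook = {}
for platform, scores in platform_exports.items():
    for student, score in scores.items():
        gradebook.setdefault(student, {})[platform] = normalize(platform, score)

print(gradebook)  # one coherent, per-student view across platforms
```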

Unless you have a process management solution built into your adaptive learning solution to automate all the onboarding, management, grading and intervention tasks that teachers are required to do on a daily basis, implementation within schools will always be a challenge.

### Motivation and metacognition are most indicative of great learners. The one cold, hard truth we tend to forget is that upbringing and social factors are far more indicative of educational and long-term career success than any performance measure. They usually translate into higher motivation, stronger goal orientation, and stronger self-regulatory skills. The offshoot is this: if you believe you are doing the world a service by staying away from schools and simply publishing books, videos, assessments and learning exercises online, what you are really doing is imposing a self-selection that excludes the majority of the student population, which is neither highly self-motivated nor highly self-regulated. The heartbreaking fact is that those who are motivated enough for self-paced education don’t need your help; they will find a way to succeed even with just pencil, paper and a library full of good old-fashioned books. In fact, most of them are probably already college-educated or on a pretty sure path to that end.

One main reason schools remain irreplaceable in a digital age is that a school is not just a knowledge-learning institution; it is a social institution. It is supposed to provide a community with an equalizing force, making sure people of all backgrounds are sufficiently motivated, in one way or another, to complete the minimum level of education required by the community or the state. Especially with young children and underprivileged students, educational success requires a lot of classroom management. This is another factor ed tech entrepreneurs tend to overlook – without these social forces in place, the majority of the primary and secondary school population is unlikely to succeed on their own, regardless of how good your product is.

As I mentioned, this post is meant to shed light on some challenges in adaptive learning and save new entrepreneurs time in discovering and navigating them – at the very least I want to point in (hopefully) the right direction.

I love the space of adaptive learning with a passion, and I hope this post is helpful and encourages more entrepreneurs to tackle challenges in learning technologies.

Live life fuller through entrepreneurship

Perhaps more than a couple of people will agree that entrepreneurship is not a job; it is a lifestyle, a way of living that is more conducive to creating value than to consuming it. Aside from overworking yourself and chowing down on $1 pizza and processed food between projects, the daily patterns of entrepreneurs offer different perspectives on life that, I argue, eventually help one live better by capturing life’s fullest potential.

Here are some of the points that I think are worth noting:

Don’t just want, be ambitious about making things happen

This is the first difference between those who pursue entrepreneurship and many who don’t. Everybody wants things – one wants a car, wants to rid the world of corruption, wants a better apartment, wants a more equal society, wants to run a restaurant instead of being an engineer, wants to go running every day instead of working ten hours a day, and the list goes on.

Entrepreneurs want things too; the only difference is that they want things badly enough that they see no alternative but to spend most of their days making those things happen. Whatever they can’t do right now, they set about building the conditions to make viable tomorrow.

Choosing what you want to focus on instead of deciding what you want to do

I think it’s an understatement to say that most people leave school not knowing what they want to do. One quality I observed in entrepreneurs, even when we were in school together, is that, being so ambitious, they had a long list of things they wanted to do or become, even as kids. When they got out of school, the problem they faced was choosing which one to focus on, not squeezing out a decision because time was up. That is the difference.

If we live for 80 years on average, not knowing what you want to do at the age of 22 means you’ve spent more than a quarter of your life just making a decision. It’s even worse if you end up doing something you hate until you are brave enough to decide for yourself. Can you imagine taking 25% to 50% of a project’s time just to decide what the project is supposed to be?

What you choose to do or believe is due to a lack of more convincing options

It sounds pessimistic, but entrepreneurs are almost never content with the state of affairs. They chose an option, or believe what they believe, because they could not find better alternatives at the time. Sure, committing to an idea shows conviction, but thinking that the idea is the best you can have, or the only right one, is a way to weigh yourself down with stubbornness.

Keeping an open mind is what keeps your days exciting and worth looking forward to. Entrepreneurs very rarely wake up to a day the same as the one before it, because when it is, it is time to open up to more challenges.

Spend when you need to

There is nothing more expensive than the lack of better tools to solve problems, unhappy teammates and friends, or simply wasted time.

Don’t waste money buying things you don’t need, but if something is worth it to you or to the people you care about, hold on to your relationships instead of holding on to your money.

Don’t lose your judgment to relative prospect

A common mistake, as prospect theory points out, is that our judgment of value is skewed by relativity. For example, if you are worth $10,000, making an additional $100 is a huge deal to you; to someone worth a million, the same option is far less attractive.
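
For intuition only (strictly, prospect theory evaluates gains and losses against a reference point; this sketch just shows the concavity at work), here is the same $100 run through the Kahneman-Tversky value curve for gains, v(x) = x^0.88:

```python
# Illustration only: a concave value curve makes the same $100 gain feel
# smaller against a larger baseline. alpha = 0.88 is the curvature
# Kahneman and Tversky estimated for gains.
alpha = 0.88

def v(x):
    return x ** alpha

for wealth in (10_000, 1_000_000):
    felt = v(wealth + 100) - v(wealth)
    print(f"Felt value of an extra $100 at ${wealth:,}: {felt:.1f}")
# -> about 29 "units" at $10,000, but only about 17 at $1,000,000.
```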

A win is a win, and a loss is a loss. Even as you grow more successful, you should not forget that it is every bit of money, work and relationship you invested in that slowly built up to what you are today. Avoid the habit of dismissing something just because it now seems too small to be worth your while.

You are never just one thing; what we believe of ourselves becomes who we are

People like to draw lines and create categories. It is tempting to designate who we are, what we do and what we don’t do. And that is bizarre to me. Many self-described non-dancers have never really danced; many of the “tone-deaf” never practiced singing pitches; some believe that because they are into numbers they can’t be artistic; and some believe they naturally can’t handle science and math.

Oftentimes, what we believe of ourselves is more indicative of what we become than our natural and trained abilities are. When we believe that we are each just isotopes of half a dozen elements, we narrow our skill sets and our means of perception.

Why does it matter? Because most professional work is specialized, but entrepreneurship is general and dynamic. Entrepreneurship is about receiving signals on all frequencies and finding the right people to do the job. When there isn’t anyone, well, be ready to roll up your sleeves.

In fact, very few business ideas are composed of only one specialization. Whether it is streaming music, subscription boxes of beauty-product samples, online advertising or e-learning, almost everything is a melting pot of some sort.

The truth is that no person is ever just one thing. We partake in different activities daily, and almost all of us have enjoyed the artistic as well as the scientific aspects of our lives from time to time. When we selectively filter out certain signals and refuse to understand them, we risk becoming dull, unimaginative and irrelevant.

Failure has uncontrollable aspects, defeat doesn’t

As people, we fail at things all the time, and that we cannot always control; but whether we let those failures stop us from trying again is entirely up to us.

The only difference between work and play is whether or not you get paid

Growing up, I remember my parents and the people around me often invoking the phrase “well, you aren’t going to like it, but one day you will have to work”, and that stuck with me until I decided to start something of my own.

Work may not always be fun, but that’s different from being stuck with something you chronically disagree with. If you wake up every day thinking you’re only going to work for the money, well, that really sucks.

Entrepreneurs do what they do to create the things they want. In other words, entrepreneurs try to get paid for doing what they love. If you love play, and you want to spend more of your time doing what you love, think about how you can create value that others are willing to pay for.

First-Time Entrepreneur Mistakes To Avoid

First-time entrepreneurship is exciting. You dream big, you lose a lot of sleep, you learn a heck of a lot, and most importantly, you make a lot of mistakes.

Entrepreneurship is cursedly hard, there’s no doubt about that; but first-time entrepreneurship is often a crap show sustained by an adrenaline overdose.

Why do I say that?

Don’t get me wrong, it was fun. Very fun. It’s almost like going through freshman year of college again – meeting lots of people, talking about where you want to be in four years, loading up on free booze and pizza, and of course, pulling all-nighters – sometimes to get things done, but mostly just to pay up for procrastination.

That was the best time and perhaps the most creative time in my entrepreneurial career so far.

But it was also so creative for a very specific reason: it wasn’t practical.

Now, on my second venture, by contrast, it’s less ballsy and certainly less nutty. A couple of months in, I’ve only pulled one all-nighter (and that was for family reasons). My team works regular 8-to-10-hour days most of the time, and yet everything from product to biz dev to fundraising is already at least a year ahead of where my first startup was at the same stage, despite a more challenging product roadmap (toy manufacturing + electrical engineering + software engineering) and a more uncertain space (Internet of Things for education).

First-time entrepreneurs make a lot of mistakes, often unnecessary ones. Many of these mistakes originate from a distrust of structure and process (the opposite of the problem Fortune 500s have), and hence NO structure and process. Some of these mistakes lead to difficulty in fundraising, overshot deadlines and, worst of all, team meltdowns.

But these aren’t anomalies; first-time startups, in the best-case scenario, get acquired. Everyone wants a home run, but chances are you won’t hit one with two strikes against you. One realization that is as comforting as it is disturbing: by the time you exit your first successful company, the real capital gain is the experience you accumulated over years of making and overcoming mistakes.

Journeying back to the first days of my first startup, there were certainly things I wish I had known.

List of things to remember and do better with your second startup

  • Anyone can start a company, and that is not something you should pride yourself on.
    • Saying that you’re a founder sounds cool; heck, you probably use it as a pickup line, right? Well, this is not 1995 anymore; starting a tech company today is easier than microwaving a Hot Pocket. If the ROI in startups were higher, there would be infomercials on starting tech companies. Here’s the truth: there is NOTHING worth mentioning about being a tech startup founder other than the fact that you gave up a stable salary. It doesn’t matter if you did it in a dorm room, a garage, a basement or a mansion, you haven’t accomplished anything yet. Don’t take pride in something as mundane as paying your taxes; it clouds your judgment, and you need that judgment to get work done – something you can actually be proud of later on.
  • Don’t stop learning, don’t stop reading
    • I’m not talking about keeping up with TC, Mashable, TNW and the like. I am talking about all the conventions and pedantic rot in the industry that you think you don’t need to know. We first-time entrepreneurs think that we’re here to disrupt, and that because of that we can start fresh and couldn’t care less about prior art and its merits. Well, think of it as trying to develop the next commercial airliner without ever learning fluid dynamics. It is not uncommon for entrepreneurs in my field (educational technology) to think they can educate the world, yet refuse to step into a poverty-stricken inner-city school and learn about what might account for achievement gaps between socio-economic classes. Don’t presume you can revolutionize an industry without knowing what you’re revolutionizing. You may still get funded, but that doesn’t mean you’ll solve the problem – and without doing so, it’s hard to convince anyone that you’re really creating value.
  • Stop rewarding failure, don’t be proud of attempts
    • First-time entrepreneurship and trophy-kid syndrome seem to go hand in hand. We make a lot of bad decisions. We love talking about how we tried something but failed. We write blog posts about it; in fact, some failed entrepreneurs make careers out of talking about failures. Very few fields or industries other than our own reward people for not being able to deliver. You really don’t want to convince yourself that it is OKAY to make a rash decision and then spend months developing a prototype that someone else proved inadequate a year ago. You may be more independent for making that decision on your own, but certainly not wiser. And talking about it doesn’t make it any more so.
  • Rip up your backup plan
    • How much time do you spend talking to friends and ex-colleagues about retaining an offer from Google or Morgan Stanley IN CASE this THING doesn’t work out? There’s an ancient Chinese proverb about fighting the battle with your back to the river: you will only give it your all once you’ve put yourself in a position with no way out. Remember that you’re burning through cash, remember that your competition is catching up, remember that your customers are waiting on you – these are all you need to occupy your mind. What if it fails? Cross that bridge when you get there.
  • Simply criticizing the system isn’t progress (at least not for your company)
    • You can whine about how your competitors’ products are inferior. You can talk about how the top VCs are backing insubstantial PR stunts, and you can accuse your customers of being stupid. All day long. That is not going to change the fact that you’re not getting the funding and resources you need to build the better product you believe in. As first-time entrepreneurs, we love to talk about how things aren’t done right and how in a parallel universe we could have done them better. Well, that won’t help. If you can’t change the game, then play it, and play it in a fashion that realizes your vision for a better future.
  • Know who to take advice from and who not to
    • This one eluded me too – I learned it the hard way – and yet the guideline is simple: if a person has never been there and done that, you ought to question what their advice is based on. I’ve met plenty of “mentors” who claim that you don’t need to talk to investors until you have a prototype, or that you should, without any prior background, prototype whatever ideas you have and rely on user feedback to guide design. This isn’t helpful; in fact, such advice may kill your startup. It is odd that first-time entrepreneurs take advice from just about anywhere, yet tend to ignore input from those who have actually done the startup thing. I have a faint conjecture for why: those with startup experience ask hard questions and recommend practical wisdom that takes serious effort to implement, while those without it serve up fairy-tale success stories like the one told in The Social Network, and make you feel great about yourself instantly. If this is your first time, you’ll be tempted to hear only what you want to hear, but always remind yourself that starting a company is very, very difficult and arduous. The way to success is not a straight path but a series of corrections along the way (called pivots). Stay away from the folks peddling textbook entrepreneurship and IPO success stories, and stick to those who have practical wisdom from experience to share.
  • Compensate your employees, treat them very, very well
    • You’re a founder; your employees are not here for the same reasons you are. Remember that. They are here to execute your decisions, and they are not responsible for whether those decisions are good ones. One mistake I made (along with my then-partner), and that I see a lot of startup founders repeating, is delaying employee compensation until the big payday (or at least planning to). That is not fair, not at all. It is demotivating, it is demoralizing and it burns people out. The founders are the ones responsible for their own decisions, good or bad; employees who execute well need to be rewarded fairly. Do it before your A-players burn out and leave for another startup that appreciates the progress they’ve made.
  • Trust your partner when it comes to his or her expertise
    • Simple enough: you partner up with someone because that person does something better than you do. Stop trying to hog the steering wheel, stop trying to override decisions, stop arguing about things you don’t understand. Let your entrepreneurial spouse(s) do what you partnered with them to do. First-time entrepreneurs don’t want to let go of power, and more often than not this leads to pissing contests, because one party is not in it for the progress; he or she just keeps shadowing and criticizing the other co-founders to prove who is in charge. Don’t do that.
  • Time and bandwidth are not candy; stop throwing them away. Do your research. Stop building things you don’t yet understand.
    • This one happens so often that it hurts to watch someone go through the motions. Before you rush into writing an essay, drawing a wireframe or even coding a prototype for an idea, do yourself a favor: spend some time just looking things up and talking to people who may know the subject matter better than you (it may even be your co-founder!). First-time entrepreneurs, perhaps overlooking the fact that hackathons and startup pitch competitions are for fun and not conducive to product development, are obsessed with making things. We love squandering our design and engineering capacity like we’re filling up an inflatable swimming pool. If you direct the company to pivot based on advice from the first person you meet or the first trending article you read in the tech media, you need to stop. Not only are you making dangerously unsound decisions, you’re multiplying the drain on the company’s manpower.
  • The answer to an outdated process is not NO PROCESS
    • We don’t want meetings, we don’t want paperwork, we don’t want hierarchical decision-making, we don’t want to knee-jerk to sales numbers, right? So what is the alternative? Nowadays a lot of the buzzwords we throw around convey freedom and agility, and that often gets misconstrued as unstructured and spontaneous. Being afraid of an outdated process is natural; such processes can be destructive, and they are part of why most places and cultures in the world aren’t conducive to entrepreneurship. But the question is how you create a better process. Whether you’d like to admit it or not, your company is burning capital; you may have under-performing employees or founders; your flat organization is running meetings with half a dozen people who won’t agree on anything; you may be spending too much on things that don’t create enough value. These eternal problems of the business world don’t go away just because you threw out the processes that no longer solve them in the 21st century. You have to build business processes at some point. Your company may be fine without them when it’s just two clowns working out of a garage; it certainly won’t be when you’re a startup of 10 or 20, let alone 100.
  • Work-life balance
    • Pulling an all-nighter once in a while for a deadline happens. But if you find yourself pulling one every other week and consistently working 14-hour days without any time for family and friends, take that as a red flag. Always reserve one day of the week to unwind: go watch a movie, grab a drink with friends, paint, visit a museum, dance, read, anything. Burnout happens, but if you are drowning all 52 weeks of the year, that is not sustainable.
  • Follow your competition
    • There is a Chinese proverb that I profoundly disagree with, and I think it perfectly characterizes the mentality of many first-time founders: “Rather lead a pack of chickens than be led in a herd of cattle.” Utter excrement. Many people would rather stick around an ecosystem where they’re the next big thing in town than go where the best and the meanest duke it out. No competition means no improvement. When you chose to start a company, you chose to invest your time (and perhaps money) in one of the riskiest assets in the world; if you then reward yourself for choosing what is comfortable, you’ve made a horrible investment decision. Go where the big fish are, find out just how small you are next to them. Get trampled, stand back up.
  • Stop spending so much time talking bullshit with people
    • Last but not least, the majority of entrepreneurship-related events are about talking after the fact, and too many of them are hosted by people who have never successfully started companies themselves. Networking is useful and a must, but be mindful of how much of your networking is just talking smack with someone trying to rationalize other people’s successes. The only successes you should be talking about are your own, so stop spending the bulk of your time in places where people have no success to share beyond what you could already read in mainstream media. The only thing you need to know about why other people are successful is that they created value – so stop talking about what you will create for people, and start showing them what you can deliver.

That was a Polaroid from my first-ever roller-coaster ride.

Genesis

Hello world, what have you been up to?

I can’t remember the last time I actually blogged; it might have been in the days before blogging was a thing – I coded my own weblog site, wrote a few posts, and subsequently received an invitation to the principal’s office as the school tried to shut the site down. That was high school.

I guess I never really liked blogging – it reminds me of the millions upon millions of lives on this planet whose journals will be forever forgotten once they stop breathing. Blogs lack conceptual association or organization; the chronological order they come in might just be the perfect reminder of the order in which we should forget them. I never liked that realization.

It has been 15 years since I started coding. I taught myself the basics from a book about the internet that my dad purchased back in ’98 but never got around to reading himself. HTML, JavaScript, Java and eventually C/C++ – with these I earned myself some book money, bought my high school crush a dozen roses, and opened myself to a different world of a more deliberative and experimental nature. Perhaps it was an escape from the staleness of scholastic rituals and adolescent social incompatibility; there were already too many questions on my mind in the midst of the SATs, and I had to start somewhere.

Do people really change? I don’t believe so, not anymore. In 15 years, what changed, other than appearances, is perhaps the relative weight I now lend to psychology as a basis of the perceivable world, as opposed to the mathematically transcendental. In the midst of all this progression, I experimented with technology (robotics, AI, human-computer interaction), some philosophy (mostly empiricism, epistemology and causal logic), some psychology (cognitive and educational), and some arts (dancing and music), and in the end I was a wielder of none, left with the realization that I’m a lot better at explaining and deriving meaning from intellectual work than at being a substantial contributor to the community myself.

During this time, I started and sold a company, LearnBop, which took an idea I had entertained 8 years earlier (mass-scale managed crowd-sourcing of intelligent tutoring systems) and actualized it in the form of a business. Though I temporarily strayed from my original aspiration to be an academic, one thing remained certain – I don’t think I ever stopped being a student of a sort. Maybe no one ever does.

So why am I starting to blog again? I figured that if I’m a student, I ought to keep better notes of what I learn. I don’t see any prospect in publishing; I’ve published a few times (a book, a couple of papers, magazine articles) and believe me, it was more trouble than it was worth in every case. Perhaps one day this will be a fun read for a curious mind, as a source of inspiration and not instruction. No path was ever meant to lead, other than the mind of the walker aspiring to a particular end.

With that said, if you see breadcrumbs, it isn’t that I’m leaving a trail to lead myself home – no, it’s a trail of crumbs left by a mortal eating outside the comfort of an intellectual home. There never was a home, as the realities you and I perceive, be they different or similar, are holograms of our experiences and dictated beliefs.