Imagine two people trying to solve a complex mathematical problem at the same time. One uses his skills in arithmetic and algebra; the other, a computer. If you are interested in the future of that problem, who would you bet on? And it's not just computers: technologies like ever-smarter phones, interactive TVs, advanced robotics, smart cars, the ubiquitous web and mobile technology that eliminates the distance between doctor and patient, teacher and student, are quickly redefining what it means to be human.

Companies like Microsoft, Facebook, Google, Amazon and IBM have not only announced significant developments in Artificial Intelligence but also open-sourced them. The idea is to help AI become less of a mad-science research project and more of a building block available to the average programmer.

We are not quite living in the future, but we are getting there. We see applications learning, making decisions and behaving slightly less like software and more like people. Some good examples: Skype Translator; Cortana, an intelligent personal assistant created by Microsoft; Clutter, a feature in Outlook that cleans up our email inbox based on past behaviour; and HoloLens, a wearable display that overlays the real world with digitally generated three-dimensional imagery that looks and behaves like it's part of reality, something Microsoft CEO Satya Nadella calls AR, or Augmented Reality. And the director of engineering at Google, Ray Kurzweil, predicts that by 2045 we will arrive at "singularity", a term he popularised for the point at which humans and computers will merge as one and human intelligence will be enhanced a billion-fold. In his latest book, How to Create a Mind (2012), Kurzweil, described by Forbes as "the ultimate thinking machine", advocates building a synthetic extension of the brain and connecting it to the cloud. These are powerful ideas.

But here's the big question: Is technology making us more efficient and intelligent? Let's take a look at the Oji-Cree, a people numbering about thirty thousand who live in extremely cold and desolate areas of Ontario and Manitoba in Canada. For much of the 20th century, they lived a rugged, rigorous life with plenty of exercise, untouched by what we call the technological revolution.

Life was simple – mental illness or substance abuse was unheard of. It was only in the 1960s that electricity and the internal combustion engine began to reach the area. The plain-living, nomadic people embraced the new tools, advancing through hundreds of years of technological evolution in just a few decades. Life became a lot more comfortable. The hard labour of canoeing or snowshoeing was replaced by the comfort of outboards and snowmobiles. They no longer starve during the winter, thanks to refrigerators, and even enjoy pleasures like sweets, alcohol, and television. But since the advent of these new technologies, there has been a massive increase in obesity, heart disease, and Type 2 diabetes. Alcoholism, drug addiction, and suicide rates have reached some of the highest levels on earth, as revealed in a study titled Genetics, Environment and Type 2 Diabetes in the Oji-Cree Population of Northern Ontario.

“The Oji-Cree are literally being killed by technological advances,” Columbia Law professor Tim Wu writes in his New Yorker article “As Technology Gets Better, Will Society Get Worse?” (February 6, 2014). The story offers an important lesson: a society needs time to adjust to new technologies without destroying its cultural continuity. But the trouble with technological evolution is that it is driven by what we are led to think we want, as opposed to what is adaptive. In a market economy, our identities are defined by what companies decide to sell us, based on what they believe we, as consumers, will pay for.

The role of ICT in the economy has been firmly established in our country. But has there been any study of how new technologies have impacted our mental health? The tech industry, which seems to aim at minimising pain and maximising pleasure, has failed us on its own terms. It has an obligation to do greater good by catering to our more complete selves rather than just the narrow interests that eliminate space for things like thought, reflection and leisure.

Take, for example, the sweet promise of liberation from overwork, which was one of the central premises of automation. Instead, we have become plagued by a tyranny of small tasks, individually simple but collectively oppressive. And when every task has been made easy by technology, only one profession remains: multitasking. We have evolved into creatures whose lives are more productive but less satisfying. Technology is supposed to help us focus on what matters. We all know that it's easier to drive to the top of a mountain than to hike.

The view may be the same, but the feeling never is. By no means do I insist that everything be done the hard way, or that we need to suffer like our forefathers to achieve redemption. There is nothing wrong with using Facebook to communicate with someone, as long as we do not sacrifice real conversation for mere connection. Social fragmentation leads to social breakdown; close-knit communities have less crime and fewer problems. In the end, it is not technology but our collective demands that drive our destiny and define the human condition.