The Role of Artificial Intelligence in Future Technology
Computers and information systems are a huge part of people’s lives in the modern world. The transition from analog to digital electronics and the invention of the internet made information widely accessible and connected people all around the globe. This created wealth and welfare on a scale never seen before in human history. Many tasks are now automated, and robots have replaced humans in many jobs. Until recently, computer programs were nothing but sets of instructions that ran very fast; these programs were only capable of following explicit directions from humans and were confined to rules specified beforehand. However, with the emergence of data-driven systems, or machine learning, computers are able to make predictions, assumptions, and decisions; they can extract enormous value from data without being explicitly programmed. Just as manual labor was replaced by machines and algorithms during the industrial and digital revolutions, mental labor will be replaced by machine learning through the data revolution: a transition from rule-based systems to data-driven systems.
The field of Artificial Intelligence began with rule-based algorithms. Both research and applications in the field have since shifted from rule-based algorithms to data-driven ones. The literature suggests that data-driven approaches yield better overall results in many areas, such as computer-aided diagnosis (van Ginneken, 2017) and vision-based robotics (Kalashnikov et al., 2018). Increasing computational power and access to large amounts of data played a major role in this transition. Alongside extensive academic research and government funding (National Research Council, 1999), open-source software and tools created by technology companies contributed immensely to the progress of the domain. Libraries like TensorFlow by Google (Abadi et al., 2016) and PyTorch by Facebook (Paszke et al., 2019) catalyzed the development of scalable machine learning applications in academia as well as in industry. Despite these developments, machine learning, or statistical learning, has not yet reached its peak or unlocked its true potential. Today, we are observing a spike in the amount of funding and time dedicated to research in data-driven systems. “In 2019, global private AI investment was over $70B” (Perrault et al., 2019, p. 6). “Between 1998 and 2018, the volume of peer-reviewed AI papers has grown by more than 300%” (Perrault et al., 2019, p. 5). As these systems improve, we will be able to benefit from the many positive attributes of data-driven approaches.
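To make the contrast with rule-based programming concrete, the short sketch below uses PyTorch, one of the libraries mentioned above, to fit a tiny classifier to synthetic data. It is a minimal, illustrative example only; the model, data, and hyperparameters are assumptions for the sake of exposition and are not taken from any of the cited works.

```python
# Minimal, illustrative PyTorch sketch: rather than following hand-written rules,
# the program adjusts its parameters to fit whatever data it is given.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))  # tiny neural network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 4)          # synthetic feature vectors (stand-in for real data)
y = torch.randint(0, 2, (256,))  # synthetic class labels

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # how poorly the current parameters explain the data
    loss.backward()              # gradients via automatic differentiation
    optimizer.step()             # parameters updated from data, not from explicit rules
```

The same pattern scales from this toy example to much larger models; only the data and the network grow.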
The most important aspect of data-driven systems is their ability to infer knowledge from learned patterns, much as the human brain does. Both depend on data, act upon experience, and incorporate that experience into their decision processes. Neural networks, among the most effective data-driven models, mimic the neurons of the human brain by correlating incoming and outgoing signals (a single such neuron is sketched after this paragraph). This way of learning makes systems more adaptive and keeps the learning phase continuous and dynamic, unconstrained by prespecified rules, which is a huge advantage over rule-based systems. This similarity to the human brain makes data-driven systems a compelling alternative for tasks that require experience and expertise. Even medical specialists such as radiologists, whose tomography analysis requires high precision, attention, and experience, could be replaced by deep learning models. The workforce that is replaced can be transferred to areas where more labor is needed, such as the medical and caregiving industries. According to the Association of American Medical Colleges, the projected shortage of physicians in 2030 is between 40,800 and 104,700 (Markit, 2017). Another profession that could vanish in the future is foreign language teaching. According to the American Council on the Teaching of Foreign Languages (2011), 8.9 million K-12 public school students were enrolled in foreign language courses in the 2007-2008 school year. Counting college students and professionals as well, millions of people around the world need foreign language teachers to acquire foreign language skills. With the emergence of natural language processing, a subfield of Artificial Intelligence concerned with processing and analyzing language, people will be able to communicate without learning foreign languages. It is thus possible that products and services aimed at foreign language learning will diminish in importance and size. As a consequence, translators and interpreters are also expected to lose their jobs to instant in-ear translation systems.
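As a minimal sketch of what it means for a neuron to “correlate incoming and outgoing signals,” the illustrative Python function below computes a weighted sum of incoming signals and passes it through a nonlinearity. The weights and inputs here are made-up values; in a real network they would be learned from data.

```python
# One artificial "neuron": a weighted sum of incoming signals, squashed by a
# sigmoid to produce the outgoing signal. Illustrative values only.
import numpy as np

def neuron(incoming, weights, bias):
    z = np.dot(weights, incoming) + bias    # aggregate the incoming signals
    return 1.0 / (1.0 + np.exp(-z))         # sigmoid activation -> outgoing signal in (0, 1)

incoming = np.array([0.2, 0.9, -0.4])       # signals from upstream neurons
weights = np.array([0.5, -1.2, 0.8])        # connection strengths (learned in practice)
print(neuron(incoming, weights, bias=0.1))  # outgoing signal passed downstream
```

Stacking many such units into layers, and adjusting their weights from data, yields the deep networks discussed next.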
Today, deep learning models can determine land-cover characteristics of the Earth’s surface from large remote sensing data sets such as BigEarthNet with an accuracy of about 70% (Sumbul et al., 2019). They beat the masters of some of the most complex games on Earth, such as chess (Campbell et al., 2002) and Go (Silver et al., 2017), thanks to the huge amounts of data about possible game scenarios that they have absorbed. However, deep learning models can still be improved with more training data. Researchers argue that the performance and accuracy of data-driven models will increase with larger data sets, and that such models will be able to find useful patterns that have not been discovered so far (Gheisari et al., 2017), since small data sets fail to cover edge cases and complex scenarios. In the future, with increasing computational power and larger data sets, data-driven systems will be able to give more consistent and successful results; a simple way to observe this effect is sketched below.
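The following sketch, which is illustrative and not drawn from any of the cited studies, shows one common way to examine this relationship: train the same model on increasingly large subsets of a data set and measure test accuracy. It uses scikit-learn and synthetic data as stand-ins, and all names and numbers are assumptions.

```python
# Illustrative learning-curve sketch: test accuracy as the training set grows.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a large labeled archive such as BigEarthNet.
X, y = make_classification(n_samples=20000, n_features=40, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for n in (100, 1000, 10000):  # increasing amounts of training data
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"trained on {n:>6} samples -> test accuracy {acc:.3f}")
```

Accuracy typically rises and then flattens as the training set grows; with richer models and more varied data, the curve tends to keep improving for longer.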
Many applications and digital services, such as Microsoft’s Azure cloud computing platform and Google’s Chrome web browser, collect user data in order to enhance the accuracy of their deep learning models. Humans supply data in exchange for the convenience and benefits these services provide. Hospitals share imaging results for use in cancer research, for example in detecting and tackling cancerous tissue (Moss et al., 1989). This is an inevitable cooperative interaction that Licklider (1960) called man-computer symbiosis. As data sets grow and their diversity increases, machine learning models will become more successful, and Artificial Intelligence will be able to assist or replace humans in arduous tasks.
The emergence of artificial intelligence applications has had an immense effect on human life. Computers can now learn much as humans do and make decisions accordingly. As extensive research and heavy funding have revealed the future possibilities of AI, it has become clear that data-driven systems will assist and replace the mental workers of today’s world, just as physical labor was replaced by the machines and algorithms of the industrial and digital revolutions. As the data provided to machine learning models grows, they will perform better and give more accurate results, which will be a defining factor in future technology.
References
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., … & Ghemawat, S. (2016). TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.
American Council on the Teaching of Foreign Languages. (2011, February). Foreign language enrollments in K–12 public schools [PDF]. Retrieved October 17, 2015.
Campbell, M., Hoane Jr, A. J., & Hsu, F. H. (2002). Deep Blue. Artificial Intelligence, 134(1-2), 57-83.
Gheisari, M., Wang, G., & Bhuiyan, M. Z. A. (2017, July). A survey on deep learning in big data. In 2017 IEEE international conference on computational science and engineering (CSE) and IEEE international conference on embedded and ubiquitous computing (EUC) (Vol. 2, pp. 173-180). IEEE.
Kalashnikov, D., Irpan, A., Pastor, P., Ibarz, J., Herzog, A., Jang, E., … & Levine, S. (2018). Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293.
Licklider, J. C. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, (1), 4-11.
Markit, I. H. S. (2017). The complexities of physician supply and demand: projections from 2015 to 2030.
Moss, R. H., Stoecker, W. V., Lin, S. J., Muruganandhan, S., Chu, K. F., Poneleit, K. M., & Mitchell, C. D. (1989). Skin cancer recognition by computer vision. Computerized Medical Imaging and Graphics, 13(1), 31-36.
National Research Council. (1999). Funding a revolution: Government support for computing research. National Academies Press.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., … & Desmaison, A. (2019). PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (pp. 8024-8035).
Perrault, R., Shoham, Y., Brynjolfsson, E., Clark, J., Etchemendy, J., Grosz, B., … & Niebles, S. M. J. C. (2019). The AI Index 2019 Annual Report. AI Index Steering Committee, Human-Centered AI Institute.
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., … & Chen, Y. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354-359.
Sumbul, G., Charfuelan, M., Demir, B., & Markl, V. (2019, July). BigEarthNet: A large-scale benchmark archive for remote sensing image understanding. In IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium (pp. 5901-5904). IEEE.
van Ginneken, B. (2017). Fifty years of computer analysis in chest imaging: Rule-based, machine learning, deep learning. Radiological Physics and Technology, 10(1), 23-32.