Artificial intelligence (AI) is the theory and development of computer systems capable of performing tasks that traditionally require human intelligence. It is a very broad field, of which machine learning is a subdomain. Machine learning can be described as a method of designing sequences of actions for solving a problem, known as algorithms, that optimize automatically through experience, with limited or no human intervention. These methods can be used to find patterns in large data sets (big data analytics) drawn from increasingly diverse and innovative sources.

Since an initial wave of optimism in the 1950s, smaller subsets of artificial intelligence (first machine learning, then deep learning, itself a subset of machine learning) have created ever larger disruptions. The simplest way to think about their relationship is as concentric circles: artificial intelligence, the idea that arose first, is the largest; machine learning, which blossomed later, sits inside it; and deep learning, which is driving today's explosion of artificial intelligence, fits within both.

Machine learning, in its most basic form, is the practice of using algorithms to analyze data, learn from it, and then determine or predict something in the world. Instead of manually coding software routines with a specific set of instructions to perform a particular task, the machine is "trained" using large amounts of data and algorithms that give it the ability to learn how to perform the task, as the first sketch at the end of this section illustrates. Machine learning came straight from the minds of early AI enthusiasts, and algorithmic approaches over the years have included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others.

For many years, one of the best application areas for machine learning has been computer vision, although it still required a great deal of manual coding to get the job done. People would write hand-coded classifiers such as edge detection filters, so the program could identify where an object started and ended; shape detection, to determine whether it had eight sides; and a classifier to recognize the letters "STOP". From all these hand-coded classifiers they would develop algorithms to make sense of the image and "learn" to determine whether it was a stop sign; the second sketch below imitates one such hand-written filter. Good, but not overwhelmingly great, especially on a foggy day when the sign is not fully visible or a tree obscures part of it. There is a reason computer vision and image sensing did not come close to rivaling humans until very recently: they were too fragile and too error-prone. Time and the right learning algorithms made the difference.

Artificial neural networks, another early algorithmic approach to machine learning, came and mostly went over the decades. Neural networks are inspired by our understanding of the biology of our brains: all those interconnections between neurons.
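To make the "trained, not hand-coded" idea concrete, here is a minimal sketch using scikit-learn's decision tree learner, one of the algorithmic approaches named above. The synthetic dataset and the parameter choices are illustrative assumptions, not something taken from the original text.

```python
# A minimal sketch of training a model from data instead of hand-coding rules,
# using scikit-learn's decision tree learner on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Generate a toy dataset: 1,000 examples, each with 10 numeric features.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No human writes the rules: the tree's decision boundaries are learned from data.
model = DecisionTreeClassifier(max_depth=5, random_state=0)
model.fit(X_train, y_train)

# Evaluate on held-out data the model never saw during training.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The point of the sketch is that no rule inside the model is written by a person; its decision boundaries come entirely from the training examples, which is what "learning from experience" means in practice.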
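For comparison, the second sketch imitates the older hand-coded style the passage criticizes: a Sobel-style edge filter written by hand, with a detection threshold chosen by a human. The tiny image and the threshold value are made-up illustrations; real pipelines stacked many such rules (shape tests, letter templates) on top, which is exactly why they proved fragile under fog or occlusion.

```python
# A hand-coded edge detector in the spirit of the classifiers described above.
# Every number here (the kernels, the threshold) is fixed by a human, not learned.
import numpy as np

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Approximate edge strength with hand-written Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    edges = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)   # horizontal gradient
            gy = np.sum(patch * ky)   # vertical gradient
            edges[i, j] = np.hypot(gx, gy)
    return edges

# Toy 8x8 "image": a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

# A human picks the threshold; lighting changes, fog, or occlusion break it.
edge_map = sobel_edges(img) > 1.0
print(edge_map.astype(int))
```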