What Is A.I., Really?

Artificial intelligence can mean many things. Artificial “Narrow” Intelligence includes both machine learning and generative A.I. Lately, we’ve been hearing a lot about generative artificial intelligence, which includes the large language models behind tools such as ChatGPT. Large language models are a type of generative A.I. designed specifically to generate text-based content. They use deep learning techniques and massive datasets to understand, summarize, generate, and predict new content. Just one year ago, large language models could process an impressive 75,000 words per minute; now they are even faster and more accurate.
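The idea of “predicting new content” can be sketched with a toy example: count which word tends to follow which in some text, then predict the most frequent follower. Real large language models use deep neural networks trained on massive datasets; the minimal sketch below (a simple bigram counter, not a deep model, with a made-up sample sentence) only illustrates the next-word-prediction task at the heart of these systems.

```python
from collections import defaultdict, Counter

# Tiny sample corpus; real models train on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most frequently observed after `word`, or None."""
    counter = follows.get(word)
    return counter.most_common(1)[0][0] if counter else None

print(predict("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

A language model generalizes this idea: instead of raw counts over pairs of words, it learns statistical patterns over long stretches of context, which is what lets it summarize, translate, and generate fluent text.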

Equal to and Beyond Human Intelligence

Beyond the basics, A.I. can take on specific jobs: serving as virtual assistants, driving cars, translating and processing language, and recognizing images, speech, and faces.

More specifically, certain types of artificial intelligence are defined in comparison to human intelligence. The most human-like, Artificial General Intelligence (AGI), can perform as well as or better than humans on a wide range of cognitive tasks. Smarter than us across the board, Artificial Superintelligence (ASI) is defined by IBM as “a hypothetical software-based artificial intelligence system with an intellectual scope beyond human intelligence. At the most fundamental level, this superintelligent A.I. has cutting-edge cognitive functions and highly developed thinking skills more advanced than any human.”

Trillions of Dollars in Improved Productivity

A.I.’s impact on productivity will be tremendous: generative A.I. alone could add $2.6 trillion to $4.4 trillion to the global economy. About 75% of that value is expected to fall across six areas — customer operations ($404 billion), marketing ($463 billion), sales ($486 billion), software engineering for I.T. ($485 billion), software engineering for product development ($414 billion), and product R&D ($328 billion).
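As a back-of-the-envelope check on the figures above (my own arithmetic, not from the original source): the six areas sum to about $2.6 trillion, which works out to roughly 75% of the midpoint of the $2.6–$4.4 trillion range — one way to read the “about 75%” figure.

```python
# Quoted per-area estimates, in billions of USD.
areas = {
    "customer operations": 404,
    "marketing": 463,
    "sales": 486,
    "software engineering (I.T.)": 485,
    "software engineering (product dev)": 414,
    "product R&D": 328,
}

total_billions = sum(areas.values())           # 2,580 → about $2.6 trillion
midpoint_billions = (2600 + 4400) / 2          # midpoint of the quoted range
share_of_midpoint = total_billions / midpoint_billions

print(total_billions, round(share_of_midpoint, 2))  # 2580 0.74
```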

Because of A.I.’s rapid progress, “What Is A.I., Really?” addresses concerns of ethics and responsibility. Bias can result when conclusions are based on datasets that are not comprehensive or accurate. Legal risks range from plagiarism to copyright infringement. Privacy and consent issues can arise when information is used without permission to draw conclusions. Mistakes, also known as A.I. hallucinations, can result from insufficient training data, incorrect assumptions made by the model, or biases in the data used to train it.

All of these tools are there to augment experience, perspective, knowledge, and so much more, as long as we use them correctly.

Author

  • Masoud Nafey, OD, MBA, FAAO

    Dr. Masoud Nafey, OD, MBA, FAAO, is a 3x A.I. Tech Founder, a Senior Consultant to a Global Wealth Fund, and holds several board positions in innovative tech companies. Dr. Nafey helped build the Stanford University Vision Performance Center at the Human-Centered A.I. Institute. He was the Founder of Vizzario, Monokül, and MENT — deep tech, A.I., and Web3 companies focused on human-computer interfaces and network intelligence. Prior to that, he served in executive roles in technology verticals within VSP Global and EssilorLuxottica, building EHRs, telehealth solutions, and medical device image management solutions. He has a proven track record in innovating, productizing, commercializing, and scaling tech businesses.
