
Will generative AI applications like ChatGPT make us more efficient, save time, and help us become healthier, smarter, and happier? The current answer is “maybe”.

Generative AI is genuinely revolutionizing the nature of work, culture, and creativity. It's transformative for many industries and has the potential to become as ubiquitous in our homes as Siri and Alexa.

Star Wars director George Lucas saw it clearly. If you ask me, our kids will be unwrapping talking robots this Christmas, whether it's a tiny R2-D2 or a sleek gold C-3PO powered by generative artificial intelligence, placed under the tree.

Not only as a tech executive, but as a parent, I am a proponent of generative AI. The thought of my kids playing with artificial intelligence doesn't scare me. I would rather have them interact with AI that indexes trusted information than learn science, healthcare, and life hacks on TikTok. Likewise, I prefer my kids to challenge their minds with video games like The Legend of Zelda rather than watch mindless TV.

Despite its popularity, generative AI is still in its infancy.

For AI to usher in a new boom, one that benefits our children as well as technological innovation, education, and investment, two things need to happen:

1. Generative AI requires training on trusted information

The large language model (LLM) technology behind generative AI applications generates dialogue by training on the web's massive data sources and predicting the next meaningful word. Think of Google's "autocomplete" feature when searching, or its "did you mean," just on steroids.
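The "predict the next word" idea can be sketched with a toy bigram model, counting which word most often follows each word in a corpus. This is a deliberate simplification of what LLMs actually do, and the corpus and function names here are purely illustrative:

```python
from collections import defaultdict

# Tiny corpus standing in for the web-scale data an LLM trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: for each word, which words follow it and how often.
following = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, like autocomplete."""
    candidates = following[word]
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

Real LLMs replace these raw counts with a neural network over vast context windows, but the core loop, pick a likely next word, append it, repeat, is the same.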

Spend some time with generative artificial intelligence and you will find it is already quite good. Perfect? No, but it can definitely converse on many different topics, and it comes across as smart.

But because generative AI is trained on content created by fallible humans, it also spreads the misinformation found in those channels. Some of this bad output can be humorous, like asking Google's chatbot Bard "did Anakin Skywalker fight Darth Vader?" and getting "yes, they fought three times." (Which is funny because they're the same person.)

Or it could be harmful, like asking an AI "is sunscreen good for you?" and getting "maybe" because it was trained on input from a viral TikTok disinformation campaign.

This is where publishers come in, and how they can play an important role in an AI-driven future. By training these intelligent systems on reliable information and high-quality media, generative AI can reflect the best of what the world has to offer.

News publishers have checks and balances in place to report the news accurately. News editors dedicate their entire careers and lives to this. I trust journalists' assessments of breaking news more than TikTok influencers' hot takes. Yes, I said that.

2. Generative AI requires attribution and compensation for its sources

A fundamental problem with the business models of generative AI companies is how their sources of information are indexed, how those sources are recognized for their contributions, and how they ultimately get paid.

Generative AI companies need to standardize what they index, how often, and how that translates into the answers they surface. This needs to be more transparent, not just listing the source as a bullet point below the answer.

Generative AI companies need to take a stand: will they pay for the data feeds they ingest every day? At a time when misinformation is rampant on social networks, news publishers that equip generative AI to provide the right answers are providing a critical service. The news media should be paid; the question is how.

Adam Singolda is the CEO of Taboola, a contextual online advertising company.

