3 Big Generative AI Problems Yet To Be Addressed

The adoption of generative AI is potentially more significant than the introduction of the internet. It is already disruptive to most creative endeavors, and it is nowhere near as capable as it will be by the end of the decade.

Gen AI will force us to rethink how we communicate, how we collaborate, how we create, how we solve problems, how we govern, and even how and whether we travel – and that’s far from an exhaustive list. Of course, we all hope that when these technologies reach maturity, the list of things that don’t change will be far shorter than the list of things that do.

Data Center Loading

Regardless of all the hype, few people are using generative AI yet, let alone using it to its full potential. The technology is processor- and data-intensive yet highly personal, so having it reside only in the cloud will not be feasible, mainly because the size, cost, and resulting latency would be unsustainable.

Much like we have done with other data- and performance-focused applications, the best approach will likely be a hybrid in which the processing power is kept close to the user, while the massive data sets, which will need aggressive updating, are stored and accessed more centrally to protect the limited storage capacities of client devices such as smartphones and PCs.

But we are also talking about an increasingly intelligent system that will, at times, require very low latency, such as when it is used for gaming, translation, or conversation. How the load is divided without damaging performance will likely determine whether a particular implementation is successful.

Achieving low latency won’t be easy because, while wireless technology has improved, it can still be unreliable due to weather, the placement of towers or the user, maintenance outages, man-made or natural disasters, and less-than-complete global coverage. The AI must work both online and offline while limiting data traffic and avoiding catastrophic outages.
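To make the hybrid split more concrete, here is a minimal sketch in Python of one way such a routing decision could work. Everything in it, from the latency budgets to the backend names, is an assumption for illustration rather than any vendor’s actual implementation: requests that are latency-critical or made while offline go to a small on-device model, and everything else goes to the larger, fresher cloud model.

```python
# Hypothetical latency budgets per workload, in milliseconds.
# Real budgets would come from product requirements and measurement.
LATENCY_BUDGET_MS = {
    "gaming": 50,
    "translation": 150,
    "conversation": 300,
    "document_summary": 5000,
}

# Assumed average round trip to a cloud-hosted model; measured in practice.
ESTIMATED_CLOUD_ROUND_TRIP_MS = 400


def choose_backend(task: str, network_up: bool) -> str:
    """Decide where a request should run.

    Prefer the small on-device model when the network is down or the
    task's latency budget cannot absorb a cloud round trip; otherwise
    use the larger, more frequently updated cloud model.
    """
    budget = LATENCY_BUDGET_MS.get(task, 1000)
    if not network_up or budget < ESTIMATED_CLOUD_ROUND_TRIP_MS:
        return "on_device_model"  # fast, works offline, less current data
    return "cloud_model"          # slower, but larger and freshly updated


if __name__ == "__main__":
    print(choose_backend("gaming", network_up=True))            # on_device_model
    print(choose_backend("document_summary", network_up=True))  # cloud_model
    print(choose_backend("translation", network_up=False))      # on_device_model
```

In practice, the policy would also weigh data sensitivity, battery state, and the size of the prompt, but the basic trade-off stays the same: the device is fast and always available, while the cloud is more capable and more current.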

Even if we could centralize all of this, the cost would be excessive, though we do have underused performance in our personal devices that could mitigate much of that expense. Qualcomm is one of the first firms to flag this as a problem and is putting a lot of effort into fixing it. Still, expect that it will be too little, too late, given how fast generative AI is advancing and how relatively slowly technology like this is developed and brought to market.

Security

If you can get enough data, you can more accurately estimate the data you don’t have access to. For example, if you know the average number of cars in a company’s parking lot, you can, with reasonable accuracy, estimate the number of employees the company has. You can usually scan social media to find out the interests of the company’s key employees, and you can look at job postings to determine what kind of future products the company is likely to develop.
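As a rough illustration of how such an estimate might be made, and nothing more, here is a short Python sketch. The carpool factor and on-site fraction are invented assumptions; the point is only that an observable proxy plus a few assumptions yields a usable estimate of something that was never disclosed.

```python
def estimate_headcount(avg_cars_in_lot: float,
                       occupants_per_car: float = 1.1,
                       onsite_fraction: float = 0.7) -> int:
    """Estimate total employees from an observable proxy (parked cars).

    occupants_per_car and onsite_fraction are assumed values chosen
    purely for illustration, not measured figures.
    """
    people_on_site = avg_cars_in_lot * occupants_per_car
    return round(people_on_site / onsite_fraction)


# An average of ~350 cars implies roughly 550 employees under these assumptions.
print(estimate_headcount(350))  # -> 550
```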

These large language models collect massive amounts of data, and I expect that much of what they scan in is, or should be, confidential. In addition, if enough information is collected, the gaps left by what is not scanned in will be increasingly derivable.

This scenario does not apply only to corporate information. With the kind of personal information that is readily available, we’ll also be able to determine much more about the private lives of users.

Employers will be able to locate, with greater accuracy, whistleblowers, disgruntled or disloyal employees, bad employee behavior, and employees who are taking advantage of the firm illicitly. Meanwhile, a hostile entity’s ability to derive confidential information about you, your company, or even your government is becoming more viable, with far greater accuracy than I enjoyed as either an auditor or a competitive analyst.

The best defense is likely to create enough disinformation that the tools can’t tell what is real and what isn’t. However, this path will also make connected AI systems far less reliable overall, which would be fine if only a competitor used those systems. Unfortunately, it is also likely to compromise the systems that the company seeking protection uses itself, resulting in a growing number of bad decisions.

Interpersonal Relationships

Companies like Mindverse, with its MindOS, and Suki, with its employee-supplementing avatars, are showcasing the future personal use of generative AI as a tool that can present itself as if it were you. As we progressively use tools like this, our ability to determine what is real and what is digital will be significantly reduced, and our opinions of the people who use these tools will reflect more on the tool than on the person.

Imagine having your digital twin do a virtual interview, be the face of your presence on a dating app, or take over much of your daily virtual interactions. The tool will try to be responsive to the person interacting with it, it will never get tired or grumpy, and it will be trained to present you in the best possible light. However, as it advances down this path, it will be less and less like who you really are, and it will likely become far more interesting, attractive, and even-tempered than you could ever be.

This will cause problems because, much like an actor who dates someone who has fallen for a character the actor once played, the reality will eventually lead to breakups and a loss of trust.

The easiest fix would be either to learn to behave like your avatar or to limit its use to interactions with friends and co-workers. I doubt we’ll do either, but these are the two most viable approaches to mitigating this coming problem.

Wrapping Up

Generative AI is amazing and will significantly improve performance as it ramps into the market and users reach critical mass. Yet there are significant problems that will need to be addressed, including excessive data center loading, which should drive hybrid solutions in the future; the inability to prevent secrets from being derived from these enormous language models; and a considerable reduction in interpersonal trust.

Understanding these coming risks should help us avoid them. However, the fixes aren’t great, suggesting that we’ll likely regret some of the unintended consequences of using this technology.

Source : https://www.technewsworld.com/story/3-big-generative-ai-problems-yet-to-be-addressed-178213.html
