I have two immediate thoughts after this week’s whirlwind of announcements and articles about artificial intelligence and generative AI.
(1) Debating whether an algorithm needs rights before the existing inhabitants of this planet get theirs is just techbro bullshit. We continue to kill sentient species for fun and deny human rights to those who need them, but please, do argue your point about why PowerPoint with ChatGPT must be protected.
“AIs aren’t any longer just tools. They’re quickly becoming digital minds integrated into society as friends and coworkers.” No, they’re tools, and you might be an even greater one for thinking otherwise. Confusing consciousness and sentience with artificial intelligence, or even with AGI (allegedly within a five-year timeframe now), is just a nonsense distraction we don’t need right now.
This leads me to my second point, which is far more important to discuss.
(2) Consider that ChatGPT, or a future GPT iteration, goes rogue or develops AGI, and OpenAI has no option but to throw the switch out of a misguided belief that they no longer have control.
When Facebook or Google goes down, it kills OAuth and the ability to log into websites, passwords, and productivity suites. When OpenAI goes down, integrated into everything because we rushed, driven by FOMO, to build on their platform for competitive advantage, it wipes out the ability to work entirely, especially if developers build ChatGPT interfaces as the primary UX.
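If you are going to couple your product to a hosted model anyway, the bare minimum is a graceful-degradation path. A minimal sketch, assuming the v1 openai Python SDK; ask_assistant and the fallback notice are illustrative names, not anyone’s production pattern:

```python
import openai
from openai import OpenAI  # pip install openai (v1+ SDK)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FALLBACK_NOTICE = "Assistant unavailable; showing the standard filters and menus."

def ask_assistant(prompt: str) -> str:
    """Call the hosted model, but degrade gracefully if OpenAI is down."""
    try:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            timeout=10.0,  # never hang the whole UI on a remote dependency
        )
        return response.choices[0].message.content
    except openai.OpenAIError:
        # The hosted API is a single point of failure: fall back to a
        # non-chat path instead of leaving the user stranded.
        return FALLBACK_NOTICE
```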
In a generation or two, people will forget that we ever used filters, buttons, switches, drop-downs, and popup windows to be productive. That little chat window going offline will wreak havoc, and users won’t know what to do. It’s just like the video of the kid trying to swipe a newspaper and wondering why it isn’t a touchscreen.
I’m sure you’ll be able to deploy your own GPT instances wherever you wish, but an overwhelming majority will take the lazy route, go straight to OpenAI’s hosted service, and depend on it.
That, to me, is a far greater danger and issue than conferring rights on a large language model, or a robot rebellion.
It’s our pattern of reliance for the sake of convenience that society could pay the price for, and AI right now is the pinnacle of that convenience.
John Meyer has done something I believe plenty of us really need to start seriously questioning: using ChatGPT and generative AI to bring a dead person, whether a celebrity or a loved one, back to life.
He trained an AI on Steve Jobs’ voice, connected it to the #GPT4 API, and finally wired everything into Facebook Messenger to allow two-way voice conversations with “Steve Jobs” about anything.
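For a sense of how little magic is involved, here is a rough sketch of the shape of that pipeline, assuming the v1 openai Python SDK. Meyer hasn’t published his code, so the transcription and voice-cloning steps below are hypothetical stubs:

```python
from openai import OpenAI  # pip install openai (v1+ SDK)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = "You are Steve Jobs. Answer with his opinions, tone, and speech patterns."

def jobs_reply(user_text: str) -> str:
    """The text half of the loop: the GPT-4 chat API plays the persona."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content

def transcribe(audio: bytes) -> str:
    """Hypothetical stand-in for a speech-to-text step (Whisper would fit here)."""
    raise NotImplementedError

def synthesize_cloned_voice(text: str) -> bytes:
    """Hypothetical stand-in for a TTS model trained on Jobs recordings."""
    raise NotImplementedError

def handle_messenger_audio(incoming_audio: bytes) -> bytes:
    """One turn of the two-way voice conversation routed through Messenger."""
    return synthesize_cloned_voice(jobs_reply(transcribe(incoming_audio)))
```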
Whilst this is a novelty, serious ethical questions are coming thick and fast for anyone who wants to offer some kind of Black Mirror service that lets users talk to their departed loved ones, trained on those people’s own life experiences and the data held across social media sites, emails, photos, videos, and more. It invades the grieving process, but for those who open this door and build these things, there’s a lot of money to be made, and that will no doubt ease their conscience.
Filing this under ‘Do Not Want’.
In 2020, a paper titled “Cognitive Warfare” described information, and its manipulation to affect decision-making at critical moments, as a new battlefront.
“With the increasing role of technology and information overload, individual cognitive abilities will no longer be sufficient to ensure an informed and timely decision-making, leading to the new concept of Cognitive Warfare, which has become a recurring term in military terminology in recent years.”
The military makes plenty of comparisons here to China and Russia, but those threats are potentially the lesser ones, because as of last week we have truly entered a phase where generative AI can be a far more effective weapon than social networks alone.
“The exploitation of human cognition has become a massive industry. And it is expected that emerging artificial intelligence (AI) tools will soon provide propagandists radically enhanced capabilities to manipulate human minds and change human behaviour.”
The paper mentions artificial intelligence, but only in a generic sense, and it will need updating again in light of recent announcements.
If generating pictures of Trump being arrested is fun now, the sophistication of where this could go in the next 12 months should worry you.
But what should scare you more, if the congressional hearings with TikTok are anything to go by, is that the inability of those in charge to understand technology is precisely what someone with harmful intent will be banking on.
The idea of democratizing artificial intelligence died pretty quickly this week with the release of OpenAI’s GPT-4 API. Why?
Because they are now becoming the singular and largest AI company on the planet in a matter of weeks, if not days, as a result of the fever-pitch hype and FOMO from organizations, apps, and vendors who need to be a part of it.
Everyone who wants a piece of this will be directing their teams to develop an integration, because if they don’t, a competitor will.
Can you imagine the amount of data they will now be party to for further training? I’m sure the terms and conditions will no doubt waffle some handwavy bull about data privacy, but we all know that’s not how the world works anymore. People questioned Palantir and their motives a few years ago, but frankly, they look like amateurs compared with this now.
The difference in cost and speed between integrating this and developing and training your own large language model (which invariably won’t be multi-modal) will be massive, and OpenAI knows it.
You’ll want to go with the dominant player because it’s proven, the speed of iterative releases and enhancements is already apparent, and you’ll want to take full advantage.
We live in interesting times, but also dangerous ones. The people we need are not part of the conversation. The people who are involved aren’t informed, and they spend all their time in committees, writing whitepapers, and in think-tank discussions, all while the technology races ahead without any meaningful regulation or protection.
Before you get excited and bolt ChatGPT onto your retail customer experience, consider just how clunky having to talk your way through a website is going to be.
This is not a good customer experience; it takes 10x longer than browsing an app or website with simple filters. Please think before you fall for the hype and destroy your brand’s reputation because your boss told you to be “innovative”.
LLMs have a place in first- and second-line customer support processes, but this definitely isn’t it. I’m more interested in how AI like this will be implemented to handle customer queries and resolve issues using natural language, via speech or text (or both), and with multi-modal input, where a picture of a broken or damaged item can be assessed instantly alongside the request (see the sketch below). These are far more impactful than asking a bot to find a piece of clothing and then having to answer question after question.
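Image input to GPT-4 wasn’t generally available at the time of writing, so here is a forward-looking sketch of that damage-triage idea using the shape of OpenAI’s multimodal chat API; the model name, prompt, and triage_damage_claim function are illustrative assumptions, not a production recipe:

```python
import base64
from openai import OpenAI  # pip install openai (v1+ SDK)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_damage_claim(image_path: str, customer_message: str) -> str:
    """Assess a photo of a damaged item alongside the customer's request."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative: any multimodal chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a first-line support agent. Describe the visible "
                    "damage and recommend a refund, replacement, or escalation."
                ),
            },
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": customer_message},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            },
        ],
    )
    return response.choices[0].message.content
```

The point is the input, not the chat: the customer sends one photo and one sentence and gets a resolution, rather than navigating twenty turns of dialogue.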
Start planning for disruption in customer support, but for god’s sake, don’t forget the experience.