AI Roundup March 1, 2024
Klarna crunches the numbers on AI customer service, Google CEO comments on image generation "blunder", the Pentagon uses AI to target the baddies.
Klarna uses ancient tech to implement AI
The other day I read that European financial services company Klarna has replaced 700 employees with AI chatbots. The statement got people pretty upset, apparently because the company had laid off a large number of people less than a year earlier. I’m not sure why: companies are not charities; they are profit-generating machines that answer to shareholders before employees. But that is not the point I am trying to make.
Whether that statement is hyperbole is anyone’s guess, but I think all that really happened was someone in accounting took out a calculator and asked: “How many bad refunds would I need to give out before I lose the cost of my entire (or a significant portion of my) customer support team?” They saw it was a big number and said, “Heck! Authorize this machine to hand out $15 refunds to anyone who asks, and if they want more money, send them to a real person. Oh, and please give me 10% of the saved salaries (see above about companies not being charities).” And remember, it isn’t just salaries; it’s benefits, the recruitment costs of keeping everything staffed, training time, and so on.
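The accountant’s back-of-the-envelope math can be sketched in a few lines. The $15 cap and the 700-agent figure come from the story above; the fully loaded cost per agent is a made-up assumption for illustration, not anything Klarna has published:

```python
# Back-of-the-envelope break-even math for the Klarna story.
# REFUND_CAP and AGENTS_REPLACED are from the article; the
# fully loaded cost per agent is a hypothetical assumption.

REFUND_CAP = 15                 # max automatic refund the bot can grant ($)
AGENTS_REPLACED = 700           # headcount figure from Klarna's statement
FULLY_LOADED_COST = 60_000      # assumed salary + benefits + recruiting + training ($/yr)

annual_savings = AGENTS_REPLACED * FULLY_LOADED_COST

# How many "bad" $15 refunds could the bot hand out before
# eating through a year of those savings?
breakeven_refunds = annual_savings // REFUND_CAP

print(f"Assumed annual savings: ${annual_savings:,}")
print(f"Bad refunds to break even: {breakeven_refunds:,}")
```

Even with these rough numbers, the bot would have to grant millions of undeserved $15 refunds a year before the trade stops making sense, which is presumably the point the calculator made.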
Look for this coming to a call center near you.
Google and the rest of AI: six or more fingers on a hand OK, wrong-skin-colored George Washington not OK
Google CEO Sundar Pichai weighed in on the Gemini image generation blunder, calling such errors “unacceptable”. If you couldn’t tell from my earlier posts, for some reason these “errors” are treated as much worse than giving humans too many fingers. I don’t understand why.
I’m not even clear on how such a thing is fixable, or why it is a much bigger issue than other things AI produces. And don’t forget that retraining these models from scratch takes a tremendous amount of energy; is it worth it? AI is exactly that: artificial. It is not real. It may never be real. Sometimes it may be very real. Please use common sense and do your best to determine what is real and what is not.
The Pentagon uses AI to find your friendly neighborhood Jihadist
It was recently revealed that the Pentagon has been using old Google tech it bought to help identify targets in the simmering Middle East. This, too, seems to have upset a few people. The Pentagon was very clear that nothing happens automatically and that every target undergoes human review before it is acted upon. Why shouldn’t we use whatever tool we can get our hands on to destroy the very people who would take away your freedom to read this blog and mine to write it?
Obviously it needs oversight, which it seems to be getting, but think of all the things a computer does better than you do. Maybe this tool will pick up a site that is more important than all the others but far better hidden. Maybe it can analyze traffic patterns more quickly and thoroughly than a human can, revealing terrorists’ hiding spots. After all, this is how Osama bin Laden was found, and I think we can all agree that the world is a better place without him.
Focus your energies on regulating it properly rather than trying to shut down something that could save me, you, and your loved ones.