6 Comments
Andy in TX:

On the other hand... I recently listened to a Lex Fridman podcast, an interview with Indian PM Modi. Modi spoke in Hindi in the interview itself, but the audio had him speaking in English. The translator was a woman, but the voice was an excellent AI-generated version of Modi's own. (I asked an Indian student to listen to some of it and she said it was a perfect imitation of Modi's speaking style, mannerisms, tone, etc.) So there's a case of AI enhancing an audio product by pairing with a human translator. Much as Garry Kasparov has written about chess, it turns out AI + human (in chess, chess program + human) can be better than either the AI or the human alone. And Google's NotebookLM is terrific at generating podcast-like audio summaries of material (my students find it really helpful in reviewing material). So overreacting (which I think you are) is likely to throw the baby out with the bathwater.

Claire C. Holland:

And you're okay with the fact that those programs steal the work of real voice actors without compensation in order to create those AI-generated voices?

Andy in TX:

I’m good with competitive markets in which new technologies expand consumer choice. It isn’t “stealing” a job if the job can be done better or cheaper by alternative means. Presumably you opposed the introduction of computers (word processing put lots of secretaries who knew shorthand out of work), cars (no more buggies), airplanes, etc. AI is only different in that it affects more high status jobs (or, so it is claimed it will). High status people are more likely to get protectionist legislation passed than working class people and much more likely to complain in ways that get noticed. No one held a gun to anyone’s head to get me the superior experience of a more Modi-like interview with LF. And I doubt there are many unemployed English language Modi imitators around. Plus - of course - a human did the initial translation and the AI merely altered the tone and sound.

Claire C. Holland:

I think you're misunderstanding the issue of how generative AI steals - literally and wholesale - the voices of voice actors in order to create new voices (amalgams made from combining REAL, existing voices) that sound like, for example, Modi. I'm not just saying AI is stealing these people's jobs - it is LITERALLY stealing their voices in order to run at all. Your comparisons to computers, etc. do not apply for this very reason. And yes, they actually do hire voice actors to imitate people, it's called voice matching and is a real job.

But according to you, your "superior experience" of hearing an interview is more important than preserving copyright law. If that is true, we have certainly reached an impasse.

Anyway, most of this information is in the article I wrote, so you can understand why this will be my last response to you.

Andy in TX:

Well, I disagree - I think you are profoundly mistaken both about how a human learns (by listening to and imitating others) and about how LLMs learn (which is not different in kind). So we will have to agree to disagree. Say hi to your dad, a coauthor of mine.

Dr. James Ottavio Castagnera:

Hi, Andy. I'd like to come at this from a somewhat different angle... one that I will probably take up in greater detail when I do my next Substack piece in a couple of weeks. From the job-displacement perspective, you're correct that every new technology eliminates some jobs and creates others. I have been led to understand that graphic artists, for example, are finding they must retrain for new professions, because GenAI is replacing them with increasing frequency. This is not unlike what happened to typesetters and longshoremen in the not-too-distant past. I think that's the point you are making. And, yes, you and Claire seem to be at an impasse and may have to agree to disagree. Personally, I think ethics are on her side, while the history of technology is on yours. And the latter has almost always trumped the former. My fear is that this time -- as compared to the examples you cite -- the technology will be far more disruptive than anything humanity has experienced heretofore, the Industrial Revolution and the computer/internet revolution notwithstanding.

My concerns extend even beyond that possibility. Every once in a while a new technology comes along that is qualitatively different in an existential way. The thermonuclear bomb is such a technology. The technology of warfare progressed incrementally up to and through WWII. As horrific as strategic bombing, mechanized land warfare, and naval warfare à la battleships, aircraft carriers, and submarines became, armies, navies, air corps, and even whole populations adapted.

Then came the atomic bomb, and soon thereafter, the "Super." This marked what I (and most other folks, I presume) consider to be an existential advance. For the first time in all of history, it became possible to annihilate the entire human race. We entered the age of Mutually Assured Destruction. For 80 years there has been no WWIII. This, I suppose, is a good thing. No leader has been willing to risk a thermonuclear exchange.

If you believe, as I do, that the Singularity is closer than many might think, and if by that we mean the emergence of AI with intelligence superior to its creators', will it pose an existential threat? We may think we can control it. But if an AI entity is smarter than its human creators, if it is self-aware, and if it can learn and evolve independently and at warp speed, are we so sure it won't turn -- like Frankenstein's monster -- on its creator?

Personally, I rank this threat higher on my urgency scale than global warming, future pandemics, or thermonuclear war, for that matter. As Arthur C. Clarke observed, "Any sufficiently advanced technology is indistinguishable from magic." Are we playing with white magic or black magic here? And when the genie is out of the bottle -- sooner or later -- will we retain control of it? At least PEOPLE control nuclear weapons... as erratic as humans can be, at least we all seem to share the same survival instinct. Now we are creating a new species... one that will surpass us. What will it do when we forfeit control? Will it be benevolent... or will it treat us the way our species treated the Neanderthals?
