17 comments

  • a2128 1 hour ago
    AI companies love to hype up how AI will provide a great benefit to the economy and transform intellectual labor, but I hardly see any discussion about how much damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person. Maybe the person you're interviewing is actually an AI impersonating someone, or maybe they never existed in the first place. Information found online will also no longer be trustable, footage of some incident somewhere may have been entirely fabricated by AI, and we already experience misleading articles today.

    Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary value.

    • roflmaostc 1 hour ago
      Partially agree. However, this problem has existed with scam e-mails since the 90s.

For me the solution is signed e-mails and signed documents. If a person invites me to an online meeting with a signed e-mail, I can trust that it's really them.

Same for footage of wars, etc. The journalist taking it signs the videos and vouches for their authenticity. If it turns out to be AI generated, we would lose trust in that person and wouldn't use their material anymore.

      • mk89 4 minutes ago
There are people hosting agents online to talk to other agents etc. on their behalf. How difficult is it to just instruct such an agent to do the tasks you mentioned? You're assuming it's done by "bad actors" while it's most likely just going to be done by "everyone" who knows how to do it.
      • TheOtherHobbes 50 minutes ago
        How do you prove the signature isn't fake?

        Ultimately ID requires either a government ID service, a third party corporate ID service, or some kind of open hybrid - which doesn't exist.

        All of those have their issues.

        • olmo23 28 minutes ago
          I think he was referring to a cryptographic signature, possibly using the "web of trust" to get the key. I'm not convinced we need central authority to solve this.
        • tenacious_tuna 29 minutes ago
          people at my org were gleeful when they learned they could hook LLMs into Slack. Even if we had some reliable, well-used signature system, I think people would just let AI use it to send emails on their behalf.
          • bigfishrunning 24 minutes ago
            If the AI age has taught me anything, it's that most people do not care what their output is. They'll put their name on anything, taste or quality does not matter in the least. It's incredibly depressing.
      • Forgeties79 1 hour ago
        Spam emails in the 90’s don’t come remotely close to the operations people can set up by themselves with AI now.
    • chistev 26 minutes ago
      We are still in the early stage of AI and already I struggle to tell what is real or fake on my Twitter feed. It will only get better in its deception with time.

      You know those incriminating Epstein photos with his associates? A few years from now a common defense from people like that would be that the photos were AI generated, and it would be difficult to prove them wrong beyond reasonable doubt.

      People in previous cases already attempted to dismiss incriminating pics of themselves as being the work of clever Photoshop artists.

    • nslsm 43 minutes ago
      If anything deepfakes will be good for the economy because if you can’t do business with people who are far away it becomes harder to outsource.
      • bitmasher9 39 minutes ago
In general, barriers to trust/trade are bad for the economy.
    • Forgeties79 1 hour ago
      > footage of some incident somewhere may have been entirely fabricated by AI,

Or the opposite, where people attempt to get out of trouble by dismissing real evidence as "AI"

      • bigfishrunning 19 minutes ago
        Either way, the lack of trust is the damage.
    • whateverboat 1 hour ago
      What's the solution apart from an identity providing service?
      • a2128 1 hour ago
        I don't know of a solution. I don't think even identity verification will meaningfully solve this. People will get hacked, or provide their SEO-spamming agent with their own identity, or purposefully post fake videos under their own identity. As it becomes more normal to scan your ID to access random websites, it will also become easier to steal people's identities and the value of identity verification will go down.
        • intrasight 52 minutes ago
People don't get hacked - devices get hacked. So all we need is a better chain of trust between two people. This is not a technology development problem so much as a technology implementation problem, and a political problem.
          • bigfishrunning 17 minutes ago
            People get hacked -- a device could be flawless, but if a person is a victim of "Social Engineering" and hands the attacker a password, there's nothing the designer of the device could do about it.
        • nathanaldensr 1 hour ago
          Agreed. The sphere of trust around each of us will shrink back to only those in our physical proximity. Outside of that, no one can be trusted.
      • Gigachad 1 hour ago
        I’m seeing a huge increase in companies requiring in person interviews now. Seems there is a real possibility the internet as we know it will be destroyed.
        • dominotw 1 hour ago
LinkedIn is completely destroyed now. There are tons of AI bots there, and real humans are now fronts for AI. So you can't even trust content from people you know.

An identity service is not useful either, because the account might belong to a real person who is just a pipe to AI, like we see on LinkedIn.

        • rkomorn 1 hour ago
          I think you might be right and I think I'll like some of the consequences and hate some of the others.

          More in-person stuff feels like a win to me (and I say this as someone who probably counts as introverted).

          Not being able to trust any online interactions anymore? Seems like a new height in what was already a negative.

      • adithyassekhar 1 hour ago
        That's just shifting the problem not solving it.
    • thunky 48 minutes ago
      > damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person

      What damage are you talking about?

      I'm not sure I understand why it matters that there is no real person there if you can't actually tell the difference. You're just demonstrating that you don't actually need a human for whatever it is you're doing.

      • bigfishrunning 20 minutes ago
        Your wife or mother calls you or video calls you and says to meet her somewhere, or to send money, or to pick up groceries or whatever. Does it not matter that it wasn't her? Could it be someone trying to manipulate you into going somewhere, to be robbed or whatever? At any rate, you'll need to verify that information came from the source you trust before you act on it, and that verification has a cost.

        The damage is to the trust we have in our communication media. The conclusion here is that every person is trivial to impersonate; that's the damage.

        • thunky 6 minutes ago
          Not disagreeing, but the context of GP was business/economy/hiring.

          Also it was already possible for someone to impersonate your mother via text or similar, and even easier to pull off.

      • rdevilla 41 minutes ago
        Because what you are actually doing is exchanging symbols, tokens, if you will, that may be redeemed in a future meatspace rendezvous for a good or service (e.g. a job, a parcel). These tokens are handshakes, contracts, video calls, etc. to be exchanged for the actual things merely represented therein.

        Instead what we have now with AI is people exchanging merely the tokens and being contented with the symbol in-and-of itself, as something valuable in its own right, with no need for an actual candidate or physical product underlying the symbol.

        There is a clip by McLuhan I can't be assed to find right now where he says eventually people will stop deriving pleasure from the products themselves and instead derive the feelings of (projected) accomplishment and pleasure from viewing advertisements about the product. The product itself becomes obsolete, for all you actually need to evoke the desired response is the advertisement, or the symbol.

        A hiring manager interviewing an AI and offering it a job is like buying the advertisement you just watched, and.... that's it. No more, the transaction is complete.

      • skydhash 42 minutes ago
        > What damage are you talking about?

        Not GP, but there's a lot of damage that can be done with impersonation.

      • chii 42 minutes ago
        The grandparent post has the belief that human interaction is intrinsically better. Not sure i agree, but i can understand the POV.

        However, the increase in fake videos that are difficult to tell from real is indeed a potential issue. But the fact that misinformation today is already so prevalent is evidence that better video doesn't make it any worse than it already is imho.

      • esseph 20 minutes ago
        Imagine how this plays out in courtrooms the world over for evidence.

        We're in deep shit.

  • octopoc 50 minutes ago
    Just say something that would violate AI safety. Then you can be sure they’re a real human.

    “Auntie, it’s me! N*** k** f**! X is really a man! ** did 9/11!”

    “Oh it really is you Johnny!”

    We’re all going to have to start communicating this way. Best of luck.

    I offer consulting services on the side to help professionals hone these skills. $250 / hour.

    • sharperguy 27 minutes ago
That only proves you're not a corporate model, rather than a locally running model that's been trained to allow saying that.
    • wat10000 33 minutes ago
      Don’t forget Tiananmen Square to catch the Chinese models.
      • ui301 29 minutes ago
        The car wash at Tiananmen Square is 150 meters away ...
    • slekker 37 minutes ago
      That's a bargain Johnny boy! My company gives me $250 in AI tokens to use every day!
  • forkerenok 2 hours ago
    > At first, my aunt wasn't buying that any AI was involved. [...] There was a long pause. "I was like 90% sure," she said, hesitating. "But that sounded more artificial."

    There is a thing about many people. I don't remember the phenomenon's name, if it has one, but it goes like this:

Given enough time to reconsider options, people will endlessly flip-flop between them, grabbing onto various features over and over in a loop.

    • V-2 1 hour ago
This phenomenon (or a closely related one?) is recognized and known as Kotov Syndrome in the context of chess.

      A summary, courtesy of chess dot com:

      > The name of this "syndrome" comes from GM Alexander Kotov, author of the classic chess book Think Like a Grandmaster. In the book, Kotov described an incorrect yet very common calculation process that often leads players to select a suboptimal or bad move.

      > According to Kotov, in positions where the lines are complex and there are numerous candidate moves and variations to calculate, it's easy to make a hasty move. A player in that situation might spend too much time going over two moves and all of their ramifications without finding a favorable ending position. In that process, the player is likely to go back and forth between the two different lines, always coming to the same unsatisfying conclusion—this wastes precious mental energy and time.

      > After spending too much time evaluating the first two options, the player gives up the calculation due to time pressure or fatigue and plays a third move without calculating it. According to the author, that sort of move can cause tremendous blunders and cost the game.

    • onion2k 1 hour ago
> Given enough time to reconsider options, people will endlessly flip-flop between them, grabbing onto various features over and over in a loop.

      People will default to believing something is AI if there's no downside to that opinion. It's a defence mechanism. It stops them being 'caught out' or tricked into believing something that's not true.

      As soon as there's a potential loss (e.g. missing out on getting rich, not helping a loved one) people will switch off that cynical critical thinking and just fall for AI-driven scams.

      This is the downside of being a human being.

    • sph 1 hour ago
      Dissonance between what you instinctively believe and what you think the other person wants you to say.

      Easy to replicate by asking someone something obvious, like the weather, and when they reply ask “are you sure?” - they won’t be so sure any more (believing it’s a trick question)

      If I ask my mother if I’m real, she’ll have a pause because she has never had to entertain such a question, or the possibility her son over the phone is an impostor. Good way to push someone towards paranoia and psychosis.

      • catlifeonmars 1 hour ago
        > Good way to push someone towards paranoia and psychosis.

        Interestingly, these are both phenomena where we start to _lose_ the ability to question our thoughts or introspect. These are phenomena of self-confidence rather than of self-doubt.

      • Kye 1 hour ago
        This is the basis of the virtual kidnapping scam/grandparent scam, or panic manipulation more generally. The manufactured urgency keeps them from doubting: the voice on the phone being off is just fear, or a bad connection, for example.

        I have personally intervened in one of those when I heard someone reading off a 6 digit number.

    • BoppreH 1 hour ago
      Paradox of choice? It's more related to the number of choices and the impact on people's anxiety, but it's close.
    • Quekid5 2 hours ago
      Analysis Paralysis?
    • vasco 2 hours ago
There's also another phenomenon: whatever the latest idea is, it must be the best. Many people make this mistake and even convince themselves they're right now because "they used to think like that" before.

      So at each stage in the loop they are always super convinced of the position.

      • psychoslave 1 hour ago
        Even not being 100% confident, at some point people have to decide what to do.

Actions might include continuous checks, like the famous plan-do-check-act cycle.

Solipsism already tells us that the existence of anything beyond one's present experience is uncertain. So taking almost anything for granted, outside of pure metaphysical argument, requires an act of faith.

        https://en.wikipedia.org/wiki/Solipsism

  • ui301 43 minutes ago
I've started to prove it (here on LinkedIn, countering its Moltbookification) via my bad handwriting – the final frontier of AGI. Finally, a lifetime of training to write more or less illegibly pays off.

    https://www.linkedin.com/posts/fabianhemmert_handwriting-vs-...

    It feels good to connect with humans that way.

I'm trying the same with my (vibe coded!) site "jetzt" (German for "now"), where I photo-blog impressions from everyday life. Only insiders will know what they mean beyond their aesthetic, and it also feels like a good way of human connection in these times.

    https://jetzt.cx/

    (No food, no plane wings, just ugly banalities and beautiful nothingness from everyday life.)

  • taylodl 2 hours ago
    This is why you need a phrase that you've never shared in a text or on social media that you can use so your family knows it's you. Especially to protect them from scammers pretending to be you.
    • krisoft 57 minutes ago
I bet that a confident scammer is prepared to deal with things like that. They want to put you in a state where you are under time and emotional pressure, and your "relative" will have a well-practiced response for why they can't answer your weird questions.

Imagine your crying grandson who caused a traffic accident in Mexico and the police planted drugs in his car and now he needs money to pay them off. He is in pain and probably has a concussion (explanation for why he can't remember what you are asking), the police are hassling him to get off the phone (time pressure, explanation for why the quality of the call is terrible). Will you get hung up on some code word he asked you to memorise years ago, which you can't even remember anymore? And if you bring it up he just starts crying and tells you that you are his last chance to turn his life around. And you remember when he was a wee little kid and he fell and scraped his knee and you comforted him. Just the thought of pressing him on the code makes you feel like a terrible person. Or not. And then the scammer just finds someone more gullible. Theirs is a numbers game after all.

    • kalaksi 1 hour ago
      Or just find a shared memory/moment not available on the internet when in doubt. I don't think people will be that eager to remember another passphrase.
    • sam_lowry_ 2 hours ago
      A password, you mean?
      • bandrami 1 hour ago
        In the broad sense of a shared secret, yes
      • eesmith 1 hour ago
        The text calls it a codeword:

        > The solution the world's leading experts have landed on is one your grandparents could have come up with: codewords. You, your family, business partners and anyone else you communicate with about important subjects need to come up with a secret phrase that no-one else knows you can use in an emergency to verify each other's identities. Think of it like a convoluted form of the multi-factor authentication we all use to login online.

        > "My wife and I have a codeword that we use if we ever get an unusual call," Farid says. "We haven't needed to use it yet, but sometimes I ask just to test her to make sure we don't forget it."

    • bandrami 1 hour ago
We have two for our alarm system: a shibboleth and a duress word. You write yours on a card, seal the envelope, and it's couriered to the operators.
  • amelius 55 minutes ago
    > "Six fingers is not an AI thing anymore," Carrasco says. The best AI tools stopped adding extra fingers years ago

    How was this solved, actually? More training data, or was there more to it?

  • kriro 41 minutes ago
    Am I too naive in thinking the answer is rather simple? Cryptographic proofs (digital signatures). For text this should be trivial and for streaming video/audio you can probably hash and sign packets or maybe at least keyframes or something?
    • bitmasher9 35 minutes ago
I think this is naive; it just kicks the can down the road. How do you trust that the signer is human?
  • hgo 52 minutes ago
    Remember hotornot.com? Soon we can muse at realornot.com
  • XorNot 2 hours ago
    At this point "spotting AI" is IMO an irrelevant skill. It's something to be aware of but a bunch of the time I can't tell even with an extended look on static images, or if I'm on a phone and scrolling then nothing really tweaks automatically - perceptually the flaws blend exactly as you'd expect them to.

    So it's all context clues really - i.e. if the video tracking shot is sort of within the constraints of the models, plays to obvious agendas etc. then I might tweak to go looking for artifacts...but in the propaganda game? That's already game over. And we're all vulnerable to the ground shifting beneath us - i.e. how much power would there be if you had a model which could just slightly exceed those "well known" limitations?

    IMO the failure to implement strong distributed cryptography much earlier in the digital age is going to punish us hard for this - i.e. we haven't built a societal convention of verifying and authenticating digital communications amongst each other, and technology has finally caught up that it can fool our wetware now. It was needed well before this - e.g. the rise of the telephone scam and VOIP should've been when we figured out how to make sure people were in the habit of comprehending digital signatures and authentication. It isn't though, and now something much more dangerous is out there.

    • drzaiusx11 44 minutes ago
      Recently one of my friends got email hijacked and whatever entity it was seemingly used her past sent emails as a training corpus to construct some very convincing pleas for donations involving a dog rescue she's been operating for several years.

      It also included personal details only her closest friends and family would know. I assume this is being done at scale now. These are NOT Nigerian prince scams of yesteryear; this is something entirely different.

  • hk1337 42 minutes ago
    Show up in person, she's still not convinced.
  • paganel 33 minutes ago
    The author should have mentioned that this was partly an article to whitewash Netanyahu, but this coming from the BBC (and from the mainstream British media as a whole) that was to be expected.
  • paxrel_ai 1 minute ago
    [dead]
  • vaildegraff 1 hour ago
    [dead]
  • dev_tools_lab 1 hour ago
    [dead]
  • mystraline 17 minutes ago
    Tl; dr. Garbage article whitewashing Neten-yahoo and israel.

    But about deepfakes, these exist to re-add 6 fingers. Once you do this, you can claim the video was generated.

    https://www.etsy.com/listing/1667241073/realistic-silicone-s...

  • Am4TIfIsER0ppos 1 hour ago
    [dead]
  • Tepix 2 hours ago
    Here's a free business idea:

Perhaps we need tamper-proof authenticated cameras in all major cities worldwide that publish a livestream 24/7, so you can then stand in front of them to prove your human existence...

    This could be something that notaries around the world could offer as a service.

    • nicbou 1 hour ago
      I heard that in France, they'd use postal office workers to verify people's IDs. It's a brilliant alternative to whatever we're doing in Germany.
      • jrjeksjd8d 1 hour ago
        We couldn't possibly employ people to solve the problem. Don't you know the post office is a waste of money?
      • Zinu 1 hour ago
        Isn’t that just like Postident in Germany?
        • nicbou 11 minutes ago
Not at all. Postident required going to the post office in person with your ID, famously didn't accept a lot of foreign IDs, and required an Anmeldung.
      • FinnKuhn 1 hour ago
        What are we doing in Germany?

        The options I have seen so far were a) using our digital IDs, which is very handy or b) having a bank verify my identity in person with my ID, which is also pretty good.

        • nicbou 10 minutes ago
          These options are not available to recent immigrants, people with foreign documents and people without a registered address. I spent a lot of time working around those limitations.
      • mrlnstk 1 hour ago
        Don't we have PostIdent in Germany? At least I used it to open my bank account.
    • DaanDL 1 hour ago
      Today, we proudly announce, the Meta Rayban 365
    • UqWBcuFx6NV4r 2 hours ago
      The bus that couldn’t slow down.
    • exitb 2 hours ago
Or in general, a way to digitally sign a tamper-free video recording made with a camera from a reputable manufacturer. Maybe a regular iPhone already has enough integrity checks and security contexts to achieve this.
      • intrasight 34 minutes ago
I'm almost certain that an iPhone camera can do that, and the reason is that Apple controls the full stack. It's necessary but not sufficient, since it's missing the identity maintenance when media leaves the device. Apple would have to place a cryptographically signed digital watermark into a global blockchain so that the analog hole can be closed. All devices that present that media back to a human would need to verify the content's provenance chain back to the initial capture device.

There's nothing missing technology-wise to achieve this, but we, at this point, lack the collective will and the regulatory regime. I do foresee a future where this is the norm and anything you listen to or watch can be traced back to the device that captured the data.

    • monster_truck 1 hour ago
      How exactly would this make money
      • mkl 53 minutes ago
        Instead of having it constantly running, you have to pay to turn it on for a couple of minutes.
    • tjpnz 1 hour ago
      We used to have something similar in NZ. Got removed eventually because of flashing.