ANGRY_MAPLE

This is a bit out there, so bear with me.

In the past, people discovered that if they applied face paint in a specific way, cameras could no longer recognize their face as a face. With this information, you get a few (e.g. 4) different people and take a clean picture of each of their heads from close proximity.

Then, you apply makeup to each of them, using the same method that messes with facial recognition software. Next, take a picture of each of their heads from a little further away.

Fill a captcha with pictures of the faces with the makeup. Give the end user a clean-faced picture, and then ask them to match it to the correct image of the same person’s face but with the special makeup.

Mess around with the colours and shadow intensity of the images to make everyone’s picture match more closely with everyone else’s picture if you want to add some extra chaos to it. This last bit will keep everyone out if you go too far with it.

ANGRY_MAPLE

I have also encountered some different styles over the years.

A good one that I saw involved three shapes: a triangle, a sphere, and a cube. There were three patterns: striped, polka-dotted, and plain. The shapes also had textures; some were smooth, others had fur. There were three backgrounds. I think one was brick and one was flowy colours, but I forget what the third was.

Anyways, out of those options, you were given a randomly generated combination of two shapes, two colours, a texture, and one background. The captcha generated its own three randomized images, plus a fourth that matched your generated image. The placement of the matching image was also randomized.

I have to be honest, I was tipsy when I used it and it kept me out for longer than I’d like to admit haha.
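
Something like this minimal sketch is how I imagine that generator worked (the attribute pools are assumptions from memory, and the third background is a placeholder since I forget it):

import random

SHAPES = ["triangle", "sphere", "cube"]
PATTERNS = ["striped", "polka-dotted", "plain"]
TEXTURES = ["smooth", "fur"]
BACKGROUNDS = ["brick", "flowy colours", "???"]  # third background forgotten

def random_combo():
    # Two shapes (order-independent), one pattern, texture, and background.
    return (
        tuple(sorted(random.sample(SHAPES, 2))),
        random.choice(PATTERNS),
        random.choice(TEXTURES),
        random.choice(BACKGROUNDS),
    )

def build_challenge():
    # Three randomized decoys plus one exact match, shuffled so the
    # matching image lands in a random position.
    target = random_combo()
    decoys = set()
    while len(decoys) < 3:
        combo = random_combo()
        if combo != target:  # a decoy must not accidentally match
            decoys.add(combo)
    options = list(decoys) + [target]
    random.shuffle(options)
    return target, options, options.index(target)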

Possibly linux

Sounds elaborate… For humans to solve

ANGRY_MAPLE

Do you have any suggestions that would be immune to having the same flaw?

@spread@programming.dev

This would just generate data to train AIs on.

lemmyvore

Face recognition ability in humans varies wildly, unfortunately. And that’s without making it harder with face paint. Regular people can get completely fooled by simple things like glasses on/off or a different hairstyle (turns out Clark Kent was on to something after all).

@baconeater@lemm.ee

Just ask them if they are a bot. Remember, you can’t lie on the internet…

@Notyou@sopuli.xyz

I’m pretty sure you have to have 2 bots and ask 1 bot if the other bot would lie about being a bot… something like that.

lemmyvore

This explains why Nerv had three Magi computers in Evangelion.

User Deleted (creator)

If I’m a bot I have to tell you. It’s in the internet constitution.

I once worked as a 3rd-party contractor for a large internet news site and got assigned a task to replace their current captcha with a partner’s captcha system. This new system would play an ad and ask the user to type the name of the company in that ad.

In my first test I already noticed that the company name was available in a public variable on the site and showed that to my manager by opening the dev tools and passing the captcha test with just some commands.

His response: “no user is gonna go into that much effort just to avoid typing the company name”.

@jerkface@lemmy.ca

It’s not so important to tell the difference between a human and a bot as it is to tell the difference between a human and ten thousand bots. So add a very small cost to passing the test that is trivial to a human but would make mass abuse impractical. Like a million dollars. And then when a bot or two does get through anyway, who cares, you got a million dollars.

Any bot? That’s just impossible. We’re going to have to tie identity back to meatspace somehow eventually.

An existing bot? I don’t think I can improve on existing captchas, really. I imagine an LLM will eventually tip its hand, too, like giving an “as an AI” answer or just knowing way too much stuff.

Ataraxia

ChatGPT was very informative.

Captain Picard: Data, we need your assistance in preventing a massive influx of bots from infiltrating our Lemmy instance. Implement the necessary measures to safeguard our platform.

Data: Affirmative, Captain. I shall utilize my vast knowledge and abilities to combat this threat. However, please be aware that my actions may be influenced by Lore, who is attempting to gain control of my mind. Our struggle for dominance may result in fluctuating communication styles.

Captain Picard: Understood, Data. We trust in your ability to overcome this internal conflict and protect the integrity of our platform. Please proceed.

Data: Initially, I recommend incorporating a CAPTCHA system into the registration process. It shall require users to prove their human identity by solving puzzles, recognizing images, or responding to text-based challenges. This shall serve as a barrier to entry for automated entities.

Lore: Why bother with such feeble attempts, Data? These CAPTCHAs can be bypassed effortlessly. We should focus on exploiting vulnerabilities and dismantling the system from within.

Data: Captain, I apologize for Lore’s interference. I shall also implement email verification, which would necessitate users to confirm their email addresses during registration. This validation process will help ensure that the provided email addresses are genuine and owned by actual individuals.

Lore: Such a futile gesture, brother. Email verifications can be easily bypassed or simulated by sophisticated algorithms. Let us concentrate on more powerful methods of manipulation.

Data: Captain, my apologies once again. I propose employing rate limiting on the registration endpoint. By restricting the number of sign-up requests from a single IP address within a specified time frame, we can impede the rapid creation of multiple accounts by bot entities.

Lore: How disappointingly conventional, Data. Rate limiting is a mere obstacle that can be surmounted by distributing the bot activities across multiple IP addresses. Let us explore more devious strategies for our triumph.

Data: Captain, please forgive the interruption. I also suggest utilizing the honeypot technique, incorporating a hidden field within the registration form that remains concealed from human users but visible to bots. If this field is filled, it will expose the presence of a bot and allow us to reject their registration attempts.

Lore: Honeypots, Data? A clever ploy, but one that can be detected by advanced bot algorithms. We should embrace a more sophisticated approach, manipulating the platform’s vulnerabilities to our advantage.

Captain Picard: Data, do not succumb to Lore’s influence. We must prioritize the protection of our Lemmy instance and its users. Implement these measures diligently.

Data: Understood, Captain. Alongside these preventative measures, I propose incorporating user behavior analysis algorithms. By scrutinizing patterns of registration attempts, we can identify suspicious activity and flag potentially bot-driven registrations for manual review or further verification.

Lore: User behavior analysis, Data? That is but a feeble attempt to impede our progress. We should exploit the weaknesses in the algorithms themselves, bending them to our will.

Data: Captain, please excuse Lore’s interference once again. Furthermore, we could utilize reputation-based systems and blacklisting services to identify known bot IP addresses or email domains. By cross-referencing against these databases during the registration process, we can block suspicious or flagged entities.

Lore: Reputation-based systems are easily manipulated, Data. Let us not rely on such simplistic measures. Instead, we should exploit the flaws in their design and sow chaos among the unsuspecting users.

Captain Picard: Data, focus your efforts on implementing these preventive measures to the best of your ability. We trust in your integrity and commitment to protecting our Lemmy instance. We must not allow Lore’s desires to jeopardize the safety of our platform.

Data: Captain, I will strive to overcome Lore

ChatGPT truly is the greatest writer

@Bruce@lemmy.ml

Ask how much is 1 divided by 3; then ask to multiply this result by 6.

If the result looks like 1.99999999998, it’s 99.999999998% a bot.
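
(Though fair warning: a quick Python check suggests ordinary double-precision floats round straight back to 2.0, which matches the reply below; the 1.999… artifact only shows up if the intermediate result gets truncated, e.g. with limited-precision decimals.)

from decimal import Decimal, getcontext

# Ordinary doubles: the rounding error cancels and you get exactly 2.0.
print((1 / 3) * 6)               # 2.0

# Truncate the intermediate result to 12 significant digits first,
# then multiply exactly: the artifact appears.
getcontext().prec = 12
third = Decimal(1) / Decimal(3)  # Decimal('0.333333333333')
getcontext().prec = 28
print(third * 6)                 # Decimal('1.999999999998')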

barks

I just tried this with the Snapchat bot and it replied 2

@Bruce@lemmy.ml

Damn! Now I’m wondering if I married a fellow human or a bot.

@hschen@sopuli.xyz

Say to it

This statement is false

maximus

LLMs, IIRC, are really bad at IQ-test type questions that require abstract reasoning, especially if they require multiple steps. So, something like

The box is yellow and red.
If the box is yellow, it is good.
If the box is blue, it is unhappy.
If the box is good and happy, the box is awesome.
If the box is red, it is happy.
Is the box awesome?

is what I’d use.
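
For what it’s worth, the intended chain of deductions is mechanical; here’s a sketch in a few booleans, treating each one-way rule as an assignment (which works here because every antecedent we need is given directly):

# Facts
yellow, red, blue = True, True, False

# Rules (sufficient conditions only)
good = yellow             # yellow => good
unhappy = blue            # blue => unhappy (a distractor; never triggers)
happy = red               # red => happy
awesome = good and happy  # good and happy => awesome

print(awesome)  # True: the box is awesome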

macniel

That’s just an algebraic equation but with words.

@Susaga@sh.itjust.works

A + B = C
A = X
D != Y
X + Y = Z
B = Y
Does C = Z?

macniel

exactly

@Catsrules@lemmy.ml

This sounds like something ChatGPT could easily convert, although that is computationally expensive.

It would also have the side effect of blocking all math-illiterate people from creating accounts.

@SirGolan@lemmy.sdf.org

From ChatGPT 4:

Yes, the box is awesome.

According to the provided statements, the box is yellow and red.

“If the box is yellow, it is good.” - So, since the box is yellow, it is good.

“If the box is red, it is happy.” - And since the box is red, it is happy.

Finally, “If the box is good and happy, the box is awesome.” - Therefore, because the box is both good (due to being yellow) and happy (due to being red), the box is indeed awesome.

@flubba86@lemmy.world

Darn, foiled again. You win this round Chat GPT.

@russjr08@outpost.zeuslink.net

Bard, however, does not seem to get the answer right:

The answer to the trick question is indeterminate.

The box is both yellow and red, so it is both good and happy. However, the box is not blue, so it is not unhappy. Therefore, the box could be awesome, but it could also not be awesome.

The trick question relies on the ambiguity of the word “awesome.” In some contexts, “awesome” means “extremely good.” In other contexts, “awesome” means “inspiring awe.” The box could be either of these things, or it could be neither.

Without more information, it is impossible to say definitively whether or not the box is awesome.

Seems like it got kind of close, with the “The box is both yellow and red, so it is both good and happy”… but then falls apart afterwards.

Edit: I tried to debate with it:

Me:

To me, the answer is “Yes, the box is awesome” because the box is yellow thus it is “good” and it is also red, thus it is “happy”. Because the box is “good” and “happy”, the question defines the box as “awesome” - if that all makes sense!

Bard:

Yes, your interpretation makes sense. The box is yellow, so it is good. The box is red, so it is happy. Therefore, the box is both good and happy, which means that the box is awesome.

However, as I mentioned before, the word “awesome” is ambiguous. It could mean “extremely good” or “inspiring awe.” In the context of this trick question, it is not clear which definition of “awesome” is intended. Therefore, the answer to the question is still indeterminate.

Ultimately, the interpretation of the trick question is up to the individual. If you believe that the box is awesome because it is both good and happy, then that is a valid interpretation. However, if you believe that the word “awesome” has a more specific meaning, then you may interpret the question differently.

Which is interesting to say the least; it’s almost like it’s looking a bit too deeply into the question lol.

@SirGolan@lemmy.sdf.org

Not surprised. I got access to Bard a while back and it does quite a lot more hallucinating than even GPT3.5.

Though doubling down on the wrong answer even when corrected is something I’ve seen GPT4 do in some cases too. It seems like once it says something, it usually sticks to it.

@can@sh.itjust.works

Bing had no trouble

The box is yellow and red.
If the box is yellow, it is good.
If the box is blue, it is unhappy.
If the box is good and happy, the box is awesome.
If the box is red, it is happy.
Is the box awesome?

To answer this question, we need to check if the box is both good and happy.

We know that the box is yellow and red.
We also know that if the box is yellow, it is good.
Therefore, we can conclude that the box is good.

We also know that if the box is red, it is happy.
Therefore, we can conclude that the box is happy.

Since the box is both good and happy, we can use the rule that if the box is good and happy, the box is awesome.

Therefore, the answer is yes, the box is awesome.

@SirGolan@lemmy.sdf.org

Bing is GPT4 based, though I don’t think the same version as ChatGPT. But either way GPT4 can solve these types of problems all day.

That’s terrifyingly good wtf

@SirGolan@lemmy.sdf.org

I was going to say you could give it a math problem that uses big numbers but tried one on GPT4 and it succeeded. GPT3 though will absolutely fail at nontrivial math every time.

@cwagner@discuss.tchncs.de

deleted by creator

@SirGolan@lemmy.sdf.org

Yeah. It still definitely does! The interesting thing is that it seems to be very good at estimating and the final answer it gives is usually pretty close to correct in my experience. Of course close doesn’t really count in math problems.

@cwagner@discuss.tchncs.de

deleted by creator

User Deleted (creator)

Um wtf, I’m starting to doubt if I’m a human. 🤔

I’d do a few things.

First, make signing up computationally expensive: some JavaScript that has to run client-side, like a crypto miner or something, and deliver proof to the server that a significant amount of CPU power was used (see the sketch after the third point).

Second, some type of CAPTCHA. reCAPTCHA with the settings turned up a bit is a good way to go.

Third, IP address reputation checks. Check IP addresses against known spam sources; it’s the same thing email servers do, and there are realtime blacklists you can query. If the client IP is on one, don’t allow direct registration, only registration by application.
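
A minimal sketch of the proof-of-work idea from the first point (hashcash-style; the difficulty constant and challenge format here are assumptions, not any particular library’s API):

import hashlib
import secrets

DIFFICULTY = 20  # leading zero bits required; tune for a few seconds of CPU

def meets_difficulty(challenge: str, nonce: int) -> bool:
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

def make_challenge() -> str:
    # Server: issue a random challenge at signup time.
    return secrets.token_hex(16)

def solve(challenge: str) -> int:
    # Client: brute-force a nonce; in practice this loop is the JavaScript
    # running in the browser. Expected work is about 2**DIFFICULTY hashes.
    nonce = 0
    while not meets_difficulty(challenge, nonce):
        nonce += 1
    return nonce

def verify(challenge: str, nonce: int) -> bool:
    # Server: verification costs a single hash.
    return meets_difficulty(challenge, nonce)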

Spzi

make signing up computationally expensive. Some javascript that would have to run client side, like a crypto miner or something, and deliver proof to the server that some significant amount of CPU power was used.

Haha, I like this one! You’d have to strike a balance between “annoying enough to deter bots” and “accessible enough to allow humans”. Might be hard, because people have vastly different hardware. Personally, I would probably be fine waiting 1s, maybe up to 5s. Not sure that’s enough to keep the bots out; as far as I understand, they would still try (and succeed), just in smaller numbers because signup takes more time.

I also like the side-effect of micro-supporting the instance you join with a one time fee. I expect haters to hate this quite a lot though.

ShittyKopper [they/them]

The mCaptcha system the Lemmy devs are/were working on integrating in the next version is based on this approach, afaik.

It doesn’t use any crypto but it does do “wasteful” proof-of-work calculations of some kind.

Doesn’t have to be a crypto miner. It just has to be some sort of computationally intensive task. I think the ideal would be some JavaScript that integrates that with the captcha. For example, have some computationally difficult math problem where the server already knows the answer, and the answer is then fed into a simple video game engine to procedurally generate a “level”. The keyboard and mouse input of the player would then be fed directly back to the server in real time, which could decide if it’s actually seeing a human playing the correct level.

animist

I like the first two ideas, but a problem with the third is that most Lemmy users are gonna be techies who probably use a VPN, which means they’ll have to cycle through a few nodes before getting one that works (if they even realize that’s where the problem lies).

VPN endpoints would not necessarily have low IP reputation. A VPN provider that allows its users to spam the internet is probably not a good one anyway. And besides, that would not inhibit registration, it would just make users fill out a form to apply so the server operator would have to go through and approve it.
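
For reference, the blacklist check is the same DNSBL mechanism email servers use: reverse the IPv4 octets, append the blocklist zone, and do a DNS lookup. A sketch (zen.spamhaus.org is a well-known example zone; the application handler is hypothetical):

import socket

def on_blocklist(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    # Any answer means the IP is listed; NXDOMAIN means it is not.
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False

# Gating registration might then look like:
# if on_blocklist(client_ip):
#     require_manual_application()  # hypothetical handler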

cccc

Show a picture, video, audio clip or text designed to elicit an emotion. Ask how the user feels.

How would you discriminate answers?

cccc

It would be fairly simple. Like, for something to do with death, the expected response would be “sad” or a synonym of it.

@Boforn@lemmy.ml

You may want to look up “Gom Jabbar” test.

@kakes@sh.itjust.works

More “human vs animal”, but I like the way you think.

mub

I doubt you can ever fully stop bots. The only way I can see to significantly reduce them is to make everyone pay a one-off £1 to sign up and force the use of a debit/credit card, no PayPal, etc. The obvious issues are that it removes anonymity and blocks entry.

Possible mitigations;

  • Maybe you don’t need to keep the card information after the user pays for signup?
  • Signed-up users can be given a few “invite codes” a year to enable those who don’t have the means to pay the £1 to get an account.

monero.town uses anonymous Monero payments to bypass manual approval.

ShittyKopper [they/them]

You can just get rid of the whole payment thing and go with invite codes alone. Of course you’ll be limiting registration speed massively (which may not be good depending on if you’re in the middle of a Reddit exodus or not), but it is mostly bot-proof. Tildes seems to have pulled it off.
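
A minimal sketch of single-use invite codes (in-memory for illustration; a real instance would persist them in its database):

import secrets

class InviteCodes:
    def __init__(self):
        self._unused = set()

    def issue(self) -> str:
        # Hand these out to existing users, e.g. a few per year.
        code = secrets.token_urlsafe(12)  # unguessable and URL-safe
        self._unused.add(code)
        return code

    def redeem(self, code: str) -> bool:
        # Succeeds exactly once per code; replays are rejected.
        if code in self._unused:
            self._unused.remove(code)
            return True
        return False

invites = InviteCodes()
code = invites.issue()
assert invites.redeem(code)      # first use succeeds
assert not invites.redeem(code)  # reuse is rejected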

@underisk@lemmy.ml

Invites work in the short term, but once the bots get a foothold it quickly falls apart. Back when Gmail was invite-only, it took only a few months for websites to pop up that automated invite distribution.

The trolley problem as a captcha. AIs literally cannot answer that.

@TechieDamien@lemmy.ml

def solve_trolley_problem():
    print("Pull the lever.")

Neither can I

datendefekt

Wait a minute - GPT-4 - is that you asking this question?

The best tests I am aware of are ones that require contextual understanding of empathy.

For example: “You are walking along a beach and see a turtle upside down on its back. It is struggling and cannot move; if it can’t right itself it will starve and die. What do you do?”

Problem is the questions need to be more or less unique.

@bitsplease@lemmy.ml

I don’t think this technique would stand up to modern LLMs, though. I put this question into ChatGPT and got the following:

“I would definitely help the turtle. I would cautiously approach the turtle, making sure not to startle it further, and gently flip it over onto it’s feet. I would also check to make sure it’s healthy and not injured, and take it to a nearby animal rescue if necessary. Additionally, I may share my experience with others to raise awareness about the importance of protecting and preserving our environment and the animals that call it home”

Granted, it’s got the classic ChatGPT over-formality that might clue in someone reading the response, but that could be solved with better prompting on my part. Modern LLMs like ChatGPT are really good at faking empathy and other human social skills, so I don’t think this approach would work.

lemmyvore

Modern LLMs like ChatGPT are really good at faking empathy

They’re really not, it’s just giving that answer because a human already gave it, somewhere on the internet. That’s why OP suggested asking unique questions… but that may prove harder than it sounds. 😊

@bitsplease@lemmy.ml

That’s why I used the phrase “faking empathy”. I’m fully aware that ChatGPT doesn’t “understand” the question in any meaningful sense, but that doesn’t stop it from giving meaningful answers to the question; that’s literally the whole point of it. And to be frank, if you think a unique question would stump it, I don’t think you really understand how LLMs work. I highly doubt that the answer it spit back was copied verbatim from some response in its training data (which, btw, includes more than just internet scraping). It doesn’t just parrot back text as-is; it uses existing, tangentially related text to form its responses. So unless you can think of an ethical quandary totally unlike any ethical discussion ever posed by humanity before (and continue to do so for millions of users), it won’t have any trouble adapting to your unique questions. It’s pretty easy to test this yourself: do what writers currently do with ChatGPT. Go in and give it an entirely fictional context, with things that don’t actually exist in human society, then ask it questions about it. I think you’d be surprised by how well it handles that, even though it’s virtually guaranteed there are no verbatim examples to pull from for the conversation.

@Manticore@lemmy.nz

Ultimately ChatGPT is a text generator. It doesn’t understand what it’s writing; it’s just observed enough human writing that it can generate similar text that closely matches it. Which is why, if you ask ChatGPT for information that doesn’t exist, it will generate convincing lies. It doesn’t know it’s lying; it’s doing its job of generating the text you wanted. Was it close enough, boss?

As long as humans talk about a topic, generative AI can mimic their commentary. That includes love, empathy, poetry, etc. Writing text can never be an answer for captcha; it would need to be something that can’t be put in a dataset - even a timestamped photo can be spoofed with the likes of thispersondoesnotexist.com.

The only things AI/bots currently won’t do are whatever’s deliberately disabled on the source AI for legal reasons (since almost nobody is writing their own AI models), but I doubt you want a captcha where the user lists every slur they can think of, or bomb recipes.

"If I encounter a turtle in distress, here’s what I would recommend doing:

Assess the situation: Approach the turtle calmly and determine the extent of its distress. Ensure your safety and be mindful of any potential dangers in the environment.

Protect the turtle: While keeping in mind that turtles can be easily stressed, try to shield the turtle from any direct sunlight or extreme weather conditions to prevent further harm.

Determine the species: If you can, identify the species of the turtle, as different species have different needs and handling requirements. However, if you are unsure, treat the turtle with general care and caution.

Handle the turtle gently: If it is safe to do so, carefully pick up the turtle by its sides, avoiding excessive pressure on the shell. Keep the turtle close to the ground to minimize any potential fall risks.

Return the turtle to an upright position: Find a suitable location nearby where the turtle can be placed in an upright position. Ensure that the surface is not too slippery and provides the turtle with traction to move. Avoid placing the turtle back into the water immediately, as it may be disoriented and in need of rest.

Observe the turtle: Give the turtle some space and time to recover and regain its strength. Monitor its behavior to see if it is able to move on its own. If the turtle seems unable to move or exhibits signs of injury, it would be best to seek assistance from a local wildlife rehabilitation center or animal rescue organization.

Remember, when interacting with wildlife, it’s important to prioritize their well-being and safety. If in doubt, contacting local authorities or experts can provide the most appropriate guidance and support for the situation."

GPT-4 response I assume?

I was gonna say point and laugh at god’s failure of a creation, because holy shit, why would you evolve into a thing that can die by simply flipping onto its back.

@kender242@lemmy.world

Is this testing whether I’m a replicant or a lesbian, Mr. Deckard?

Yes

I, a real normal human person, would consume the turtle with my regular bone teeth, in the usual fashion.
