• 1 Post
  • 6 Comments
Joined 1 year ago
Cake day: June 10th, 2023

  • In the Wikipedia page there’s this formula: R = (V_supply - V_switch - V_LED) / I_LED

    Just plug in values for the fence voltage and the LED. You can set V_switch to 0 and V_LED to 2 V; the LED’s drop is fairly insignificant compared to the fence voltage. The current should be 10-20 mA (0.01-0.02 A) so you don’t kill the LED.

    The root of this is Ohm’s law: V = IR. Diodes cause a roughly fixed voltage drop rather than acting like a resistor, which is why their drop is subtracted from the input voltage before solving for the resistance.
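
    As a rough sketch of the arithmetic (the 5 kV fence voltage here is purely an illustrative assumption; plug in your charger’s actual rating):

    ```python
    # Series-resistor sizing for an LED hung on a fence charger.
    # V_fence = 5000 V is an assumed, illustrative figure.
    V_fence = 5000.0    # fence pulse voltage (V), assumption
    V_switch = 0.0      # switch drop (V), negligible here
    V_led = 2.0         # typical LED forward drop (V)
    I_led = 0.015       # target LED current (A), mid-range of 10-20 mA

    R = (V_fence - V_switch - V_led) / I_led
    P_peak = I_led**2 * R  # resistor dissipation during the pulse (W)

    print(f"R = {R:.0f} ohm, dissipating {P_peak:.0f} W at the pulse peak")
    ```

    Note that the dissipation is a peak figure; fence energizers pulse briefly, so the average power is far lower, but the resistor still needs an appropriate voltage rating.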

    Just put them in series. Use two LEDs, one oriented in each direction, so it works whether the fence output is AC or DC.


  • jjagaimo@lemmy.one to The Agora@sh.itjust.works · *Permanently Deleted* (edited, 1 year ago)

    There are companies that do this with IDs, but they are typically global corporations or SSL certificate authorities already; Verisign and GlobalSign are two examples. Their products are unsuitable here, however, because they connect your real identity to the account. They could be useful for a one-time humanness verification, though.

    The main goal would be to decouple the humanness check from Lemmy and hand it to an authority whose sole job is to create certificates that cannot be linked back to the person. You could probably rate-limit how often each person can create a new certificate after passing the human check, as sketched below. This would still allow alts, but it would limit the number of bots one person could create, since each certificate would require passing the verification.
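
    A minimal sketch of that rate limit (all names here are hypothetical; it assumes the authority stores only an opaque hash of the verified person, never the identity itself):

    ```python
    import time
    from collections import defaultdict

    MAX_CERTS = 5            # assumed cap on certificates per verified human
    WINDOW = 30 * 24 * 3600  # assumed window: 30 days, in seconds

    # opaque person_hash -> timestamps of issued certificates; the authority
    # never needs to know who the person is, only that the same verified
    # human is asking again
    issued = defaultdict(list)

    def may_issue(person_hash: str) -> bool:
        now = time.time()
        recent = [t for t in issued[person_hash] if now - t < WINDOW]
        issued[person_hash] = recent
        if len(recent) >= MAX_CERTS:
            return False     # enough alts for this window; throttles bot farms
        recent.append(now)
        return True
    ```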

    One issue would be trust: you would need to trust the authority’s word that the person who created the certificate was human.


  • jjagaimo@lemmy.one to The Agora@sh.itjust.works · *Permanently Deleted* (edited, 1 year ago)

    I literally addressed this. My point is that we’d need to hand over personally identifying information to be 100% sure, so the best approach at the moment would instead be to verify humanness as best as possible (e.g. better captchas, AI/ChatGPT response detection, etc.) and shift the account sign-up to the authority’s side, accepting that less than 100% of accounts will be unique individuals and preventing bots in other ways.

    Also, “trusted organizations handling your data” is exactly how 99% of the modern internet works. Rarely, if ever, do we give thought to the fact that companies like Verisign exist, or to how routinely people hand credit card information to websites. At the same time, companies and corporations aren’t just some random schmuck spinning up their own authentication service.



  • A public/private key pair is more effective. That’s how “https” sites work: SSL/TLS uses certificates to authenticate who is who. Every https site has an SSL certificate, which basically contains the site’s public key. During the handshake the site proves it holds the matching private key, by signing data that you can verify with the public key from the certificate, so you can tell the responses actually came from them. Certificates are granted by a certificate authority, which is basically the identity service you are talking about. Certificates are usually themselves signed by the certificate authority, so you can tell that someone didn’t just man-in-the-middle you and swap out the certificate, and the site can serve you the certificate directly instead of you needing to go elsewhere to find it.
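
    The sign-and-verify step looks roughly like this (a sketch using Python’s `cryptography` package, with a freshly generated key standing in for the site’s real one):

    ```python
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Stand-in for the site's key pair; in real TLS the public half
    # is the one embedded in the site's certificate
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"handshake data from the site"
    signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    try:
        public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        print("signature checks out: it really came from the private-key holder")
    except InvalidSignature:
        print("tampered with or forged")
    ```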

    The problems with this are severalfold. You would need some kind of digital identity organization (or several) to handle sensitive user data. Such an organization would need to:

    1. Be trusted. Trust is the key to making these things work. Certificate authorities are often large companies with a vested interest in keeping people’s business, so they are highly unlikely to mess with people’s data. If you can’t trust the organization, you can’t trust any certificate issued or signed by it.

    2. Be secure. Leaking data or being compromised is completely unacceptable for this type of service.

    3. Know your identity. The ONLY way to be 100% sure that it isn’t someone just making a new account and a new key or certificate (e.g. bots) would be to verify someone’s details through some kind of identification. This is pretty bad for several reasons. Firstly, it puts more data at risk in the event of a security breach. Secondly, there is the risk of doxxing, or of connecting your real identity to your online identity, should your data be leaked. Thirdly, it could allow impersonation using leaked keys (though I’m sure there’s a way to cryptographically timestamp signatures and then mark the key as invalid; see the sketch after this list). Fourthly, you could allow one person to make multiple certificates for various accounts to keep them separately identifiable, but that would also potentially enable making many alts.
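
    That timestamp idea might look something like this (a sketch; the revocation store and names are hypothetical, and it assumes signing times come from a trusted timestamping service rather than from the possibly-stolen key itself):

    ```python
    from datetime import datetime, timezone

    # key_id -> moment the key was reported compromised (hypothetical store)
    revoked_at = {"key-123": datetime(2023, 6, 1, tzinfo=timezone.utc)}

    def signature_still_valid(key_id: str, signed_at: datetime) -> bool:
        # Honor a signature only if it was made before the key's revocation;
        # anything signed after the report is treated as an impersonation.
        cutoff = revoked_at.get(key_id)
        return cutoff is None or signed_at < cutoff
    ```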

    There may be less aggressive ways of verifying the individual humanness of a user, or of just preventing bots, and that may be better than the identification in the 3rd point. For example: a simple sign-up with questions to weed out bots, which generates an identity (certificate/key) that you can then add to your account. That would move the bot target away from the various Lemmy instances and solely onto the certificate authorities. Certificate authorities would probably need to be a small number of trusted sources, because making it “spin up your own” means anyone could do just that with less pure intentions, or with modified code that lets them impersonate other users as bots. That sucks, because it goes against the fundamental idea that anyone should be able to do it themselves and against the open source ideology. Additionally, you would need to invest in tools to prevent DDoS attacks and ChatGPT bots.

    User authentication authorities most certainly exist, but it wouldn’t surprise me a bit if there were no suitable drop-in solution for this. It is in and of itself a fairly difficult project, because of the scale needed from the start as well as the effort needed to verify that users are human. It’s also a service that would have to be completely free to be accepted, yet it cannot just shut down without blocking further users from signing up. I considered perhaps charging instances a small fee (e.g. $1/mo) once they pass a certain threshold of users, to allow issuing further certificates to their instance, but it’s the kind of thing I think would need to be decoupled from Lemmy to have a chance of surviving into more widespread use.