You’ve misunderstood. With the client code you can be sure that your messages are properly encrypted before leaving the device. If that’s done correctly, you don’t need to trust the server, because it can’t read your messages any more than an attacker could. Signal is in a similar position: they didn’t update the public server source for a few years, and even with the source we can’t know that’s what they’re actually running. But with a verified build of the client code we can know that our messages are encrypted such that, even if they held on to them until quantum computers became mainstream, they’d still be properly protected.
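To make that concrete, here’s a minimal sketch of what “encrypted before leaving the device” means, using a libsodium-style sealed box (PyNaCl). It’s not this app’s actual code, just the general shape of the idea:

```python
# Minimal sketch (PyNaCl sealed box), not this app's real code: the message is
# encrypted on the sender's device using only the recipient's *public* key.
from nacl.public import PrivateKey, SealedBox

# The recipient generates a keypair on their own device; the private half never leaves it.
recipient_private = PrivateKey.generate()
recipient_public = recipient_private.public_key

# The sender encrypts locally. This ciphertext is all the server ever handles.
ciphertext = SealedBox(recipient_public).encrypt(b"meet at noon")

# Only the recipient's private key can open it, regardless of what the server runs.
assert SealedBox(recipient_private).decrypt(ciphertext) == b"meet at noon"
```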
the server can store metadata though. who you’re texting, when, how often, etc. - and store that indefinitely. or even store the encrypted message, and when a flaw in the encryption is discovered 10 years later, they’re all readable. their servers could be breached and that info could be siphoned by criminals selling it to the highest bidder.
Signal’s blog had an interesting post about what they’re doing to prevent these issues.
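To illustrate the metadata point: even if the payload is opaque, a relay has to see routing information to deliver anything at all. A hypothetical log entry (made-up field names, just an illustration) could look like:

```python
# Hypothetical example of what a relay server could retain per message, even
# with end-to-end encryption: the payload is unreadable, the envelope is not.
logged_message = {
    "sender_id": "user_1842",               # who sent it
    "recipient_id": "user_97",              # who you're texting
    "timestamp": "2023-05-14T21:03:11Z",    # when (and, aggregated, how often)
    "sender_ip": "203.0.113.7",             # rough location
    "ciphertext": b"\x8a\x1f\xd3...",       # unreadable today, storable forever
}
```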
You’ve misunderstood. With the client code you can be sure that your messages are properly encrypted before leaving the device. If that’s done correctly, you don’t need to trust the server, because it can’t read your messages any more than an attacker could.
It kind of depends on how keys are handled. If the key passes through their servers at all (and it probably does), then they have access to the keys and sufficient information to decrypt your messages. It’s possible the app does send keys independently of their server (I don’t know), but I very much doubt it. And if they were capable of sending keys without a server, chances are very good they don’t actually need the server for the messages themselves, which raises the question of why they have a server at all.
But with a verified build of the client code we can know that our messages are encrypted such that, even if they held on to them until quantum computers became mainstream, they’d still be properly protected.
Assuming they don’t have the keys. This is not a valid assumption so far as I’m aware.
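For what it’s worth, this is why key possession is the whole ballgame: with any standard symmetric cipher, key plus ciphertext equals plaintext, so a server that ever sees the key can read everything it relays. Rough sketch with AES-GCM (the `cryptography` package), not this app’s code:

```python
# If the key ever reaches the server, "encrypted" traffic is an open book to it:
# symmetric decryption is one call given the key, nonce, and ciphertext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"meet at noon", None)

# Whoever holds `key` (user, attacker, or a server the key passed through)
# recovers the plaintext trivially.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"meet at noon"
```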
If the key passes through their servers at all (and it probably does), then they have access to the keys and sufficient information to decrypt your messages. It’s possible the app does send keys independently of their server (I don’t know), but I very much doubt it.
It should most definitely be a valid assumption. The keys shouldn’t be on or go through a server anywhere; that would be an absolute joke.
What makes you think that private keys are being sent anywhere? This app uses a slightly modified version of the Signal protocol (because of course it does), as they describe here, section 27, page 90. Only public keys should ever leave your device; otherwise no amount of showing the code would make it secure. That’s the whole point.
Again, with the client code you should be able to tell that the keys are generated there and not sent anywhere (rough sketch of that key exchange below).
As I said, with any app, just because they publish some server code does not mean that’s what they’re actually running on their server; for security you have to be sure that the client app is sufficiently secure on its own. Even if they were running the exact public code that “didn’t save the keys”, the server could still harvest them from memory.
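And here is the rough sketch mentioned above of what “only public keys ever leave your device” looks like, using the X25519 agreement the Signal protocol family is built on (via the `cryptography` package). The real protocol layers identity keys, prekeys and ratcheting on top of this; this is just the core idea, not the app’s actual implementation:

```python
# Sketch of X25519 key agreement: each device generates its own private key and
# only the 32-byte public key is ever uploaded. The server can relay the public
# keys but cannot derive the shared secret from them.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_private = X25519PrivateKey.generate()   # stays on Alice's device
bob_private = X25519PrivateKey.generate()     # stays on Bob's device

alice_public = alice_private.public_key()     # only these two values
bob_public = bob_private.public_key()         # ever touch the server

# Each side combines its own private key with the other's public key locally.
alice_secret = alice_private.exchange(bob_public)
bob_secret = bob_private.exchange(alice_public)
assert alice_secret == bob_secret

# In practice the raw secret is run through a KDF before use as a message key.
message_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"illustrative-example").derive(alice_secret)
```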