Hacker News | cyphar's comments

Unfortunately, they still use libolm[1] for e2ee, which is deprecated[2] and has known security issues[3]. The maintainers appear to not be interested in switching to the newer Rust-based library. Matrix argues that the timing channel attacks are not exploitable over a network, but the history of timing channel attacks suggests that very few protocols are this fortunate (most people thought timing attacks against TLS were impossible too, until someone bothered to attack them).

[1]: https://github.com/Nheko-Reborn/nheko/issues/1786 [2]: https://matrix.org/blog/2024/08/libolm-deprecation/ [3]: https://soatok.blog/2024/08/14/security-issues-in-matrixs-ol...


Yeah, although Matrix is theoretically an open protocol supporting a range of clients and servers, in practice it winds up being heavily skewed toward just Element/Synapse. I think this is partly because there is still too much churn in the protocol. A decent amount of that churn is improving things, but it still makes it too hard for average-joe devs to keep up with what's current. I don't think there's much chance of a real menu of feature-rich clients until the protocol becomes stable. Unfortunately, I don't foresee that happening soon.

AFAIK none of that churn is around functionality that actually matters that much to end users, however. Certainly nothing as important as “working clients”.

Holy hell!


There's also at least one case[1] where the locked door itself stopped someone from stopping the crash (the CA had flying experience, and Mentour Pilot[2] showed that even someone with no flying experience could be talked through an autoland if they know how to use the radio. If the CA had entered earlier they might've been able to land, though most of the passengers would've still died unfortunately.)

One of the more reasonable theories for MH370 is similar to the Germanwings case. Pilots can refuse access even if the person outside knows the access codes for the cockpit doors.

Unfortunately (as with everything else), even obvious improvements have potential downsides.

[1]: https://en.m.wikipedia.org/wiki/Helios_Airways_Flight_522 [2]: https://www.youtube.com/watch?v=YaOvtL6qYpc


> At 11:49, flight attendant Andreas Prodromou entered the cockpit and sat down in the captain's seat, having remained conscious by using a portable oxygen supply.


Yes, however it's not clear how they entered and why it took them so long (they entered a few minutes before the plane crashed due to fuel exhaustion -- the left engine shut down 50 seconds after he was seen entering the cockpit). It stands to reason that if the door had been unlocked they might have been able to enter much earlier, which could've resulted in a very different outcome.

That's why I said "If the CA had entered earlier".


In short, yes. See 4(e):

> Provide Installation Information, but only if you would otherwise be required to provide such information under section 6 of the GNU GPL, and only to the extent that such information is necessary to install and execute a modified version of the Combined Work produced by recombining or relinking the Application with a modified version of the Linked Version.

And from GPLv2:

> For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable.


Perhaps this is splitting hairs (although that's necessary for licenses), but would it not be sufficient to be able to install the binary somewhere, but not on every machine? E.g., if I tell a customer the modified binary can be installed on some emulator, but not their phone, why is that prohibited?

In this case, I guess the installation information would have to be "use an emulator and do XYZ".


Not directly related to the phenomenon in the article, Numberphile has a video[1] that goes into why pi has a surprising amount of regularity despite being transcendental.

[1]: https://www.youtube.com/watch?v=sj8Sg8qnjOg


This is addressed in the second paragraph of the article. The iPhone 6S had an OS update in October 2023 (iOS 15.8) which included a security fix for a different issue. The Chromium security issue was fixed in June 2023.


主(nushi) has wrapped back around to being more on the polite side, though I get the impression it's used more by older people. There's also phrases like 持ち主(mochi-nushi -- person who is a holder of something) that have ossified the pronoun so it probably won't go away entirely for a while.

I think the reason for the rotation of pronouns is because people start using them sarcastically which means it's no longer seen as respectful, and so new pronouns become necessary.


uBlock and YouTube have been regularly updating to counteract each other's countermeasures for the past few weeks (at least, in my branch of the A/B rollout). Force-refreshing the filters has usually fixed the issue for me. There's also a pinned thread on the subreddit with troubleshooting steps.

I rarely use YouTube on my computer these days anyway, I mainly use NewPipe.


Translating the Paul Graham video to Croatian works surprisingly well.

As a non-native speaker, I found that several bits of the Japanese transcription were just Japanese-sounding noise, and the prosody was completely all over the place in the bits that were actually Japanese. The dub does get across most of the things being talked about, but I wouldn't consider it an acceptable dub (even for most of the bits where it is actually in Japanese).

I wonder if the reason is that the dubbing works better between more similar languages and struggles with more disparate ones. I don't speak Chinese or Korean, but it would be interesting to see how good their dubs are.

Can you do another video into Japanese, to see if it's just an issue with the Paul Graham video? Also, how about translating from another language to English?


Here's another example:

Steve Jobs on death | Walter Isaacson and Lex Fridman (EN->JP): https://www.youtube.com/watch?v=6YiJLDhcWSI

Maybe it does better on longer videos, but as someone who is fairly well-versed in Japanese, this one was not a good result either. There is also a bit of wonkiness in this one, where in the dub Lex says some of Walter's lines and vice-versa, and at one point both of them repeat the same translated phrase verbatim, which doesn't correspond to anything said in the original.


Good idea, added a Putin video translated from Russian: https://www.youtube.com/watch?v=YSkjQJcqaFo


I love the slow whispered "Thaaank Yooooouuuu" at :10.


Paperback uses the AEAD-and-SSS-the-key construction, but that's more to do with the properties such a scheme gives you (it allows more flexibility with how you distribute the secret -- you can keep the main document with a lawyer and distribute the shards so even if the shard holders betray you they don't have access to the original document). You also want to have protections against fake-shard attacks (which allow an attacker with a real shard to withhold the secret from others during recovery and then keep the recovery secret to themselves), and so you need to make use of traditional cryptography anyway by signing shards to detect fakes.

To answer your question though, it primarily depends on the SSS construction and how big the quorum (number of shards needed for reconstruction, not the total number of shards created) is.

Most constructions use Galois fields and thus chunk the input and produce a new polynomial for each chunk -- the vast majority use GF(2^8), which means it's per-byte. Paperback uses GF(2^32) to get 4-byte chunks so that the x-value used is more collision-resistant. I suspect for large documents GF(2^32) will be about 4x faster because Galois field operations are very fast (which is why most tools use them over prime fields) but using 4-byte chunks reduces the number of operations needed by 4x. I considered going for GF(2^64) to get 8-byte chunks but doing so requires 128-bit integers in some operations. You can also process the chunks in parallel if needed. While very large documents would take longer, the recovery operation is pretty efficient and recovery time should scale linearly. The main scaling issue with SSS is if you want to increase the quorum size -- most of the algorithms are at least quadratic with respect to the quorum size if not worse.
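To make the per-byte chunking concrete, here's a minimal sketch of Shamir split/recover over GF(2^8) in Python. To be clear, this is not paperback's actual implementation (which works over GF(2^32) in Rust) and it makes no attempt at constant-time arithmetic -- it's only meant to illustrate the "one polynomial per byte" structure:

```python
import secrets

# GF(2^8) arithmetic with the AES reduction polynomial x^8+x^4+x^3+x+1 (0x11b).
def gf_mul(a, b):
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
        b >>= 1
    return p

def gf_inv(a):
    # a^254 == a^-1 in GF(2^8) (Fermat's little theorem analogue).
    r, e = 1, 254
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def split(secret: bytes, n: int, k: int):
    """Split into n shards, any k of which recover the secret."""
    shards = [(x, bytearray()) for x in range(1, n + 1)]
    for byte in secret:
        # Fresh random degree-(k-1) polynomial per byte, constant term = byte.
        coeffs = [byte] + [secrets.randbelow(256) for _ in range(k - 1)]
        for x, buf in shards:
            y = 0
            for c in reversed(coeffs):  # Horner's rule
                y = gf_mul(y, x) ^ c
            buf.append(y)
    return [(x, bytes(buf)) for x, buf in shards]

def recover(shards):
    """Lagrange-interpolate each byte position at x=0."""
    out = bytearray()
    for i in range(len(shards[0][1])):
        acc = 0
        for j, (xj, yj) in enumerate(shards):
            num, den = 1, 1
            for m, (xm, _) in enumerate(shards):
                if m != j:
                    num = gf_mul(num, xm)       # (0 - xm) == xm here
                    den = gf_mul(den, xj ^ xm)  # (xj - xm)
            acc ^= gf_mul(yj[i], gf_mul(num, gf_inv(den)))
        out.append(acc)
    return bytes(out)
```

Note how recovery does O(k^2) field multiplications per byte position -- that inner double loop is where the quadratic scaling in the quorum size comes from.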

If you implement it the way described in the original paper (using a prime field) then it mostly depends on the efficiency of the bignum library you are using -- though you probably want a bignum library that supports doing operations in a finite field, because doing large multiplications and then calculating the modulus afterwards is a lot of wasted work and memory. I suspect Galois fields are far more efficient at any size.
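For comparison, the prime-field version from the original paper can be sketched with Python's built-in bignums (`pow(x, -1, p)` gives the modular inverse). The choice of prime and the helper names here are my own illustration, not from paperback or any particular library:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; each secret chunk must be < P

def split_prime(secret: int, n: int, k: int):
    """One polynomial over GF(P) whose constant term is the secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def eval_at(x):
        y = 0
        for c in reversed(coeffs):  # Horner's rule mod P
            y = (y * x + c) % P
        return y
    return [(x, eval_at(x)) for x in range(1, n + 1)]

def recover_prime(shards):
    """Lagrange interpolation at x=0 over GF(P)."""
    acc = 0
    for j, (xj, yj) in enumerate(shards):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shards):
            if m != j:
                num = (num * -xm) % P
                den = (den * (xj - xm)) % P
        acc = (acc + yj * num * pow(den, -1, P)) % P
    return acc
```

Every multiplication here is a full bignum multiply followed by a reduction mod P, which is exactly the overhead the Galois-field version avoids by working on fixed-width words.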

For paperback's SSS implementation, I've written some preliminary benchmarks and it seems you can easily do secret recoveries at a rate of at least 75 KiB/s depending on the quorum size (32-person quorums are around 75 KiB/s while 5-person quorums can run as fast as 900 KiB/s). So, for reasonably-sized quorums you can easily have several-megabyte sized documents with sub-minute recovery times. Of course, doing AEAD and sharding the key is still much faster for larger documents (ChaCha20-Poly1305 can run at ~2GB/s on my machine according to "openssl speed"). But it's not necessary.



