There are total bypass options now to completely remove their hardware from your network using an ONT that lets you clone the att device serial number. Just a heads up.
You aren’t giving us enough information to even speculate about an answer. Are these enterprise-grade servers in a datacenter? Are these homemade machines with consumer or low-grade hardware that you’re calling servers? Are they in the same datacenter, or does the traffic go out over the Internet? What exists between the hops on the network? Is the latency consistent? What is the quality of both sides of the connection? Fiber? Wi-Fi? Mobile? Satellite?
Does it drop to nothing or just settle into a constant slower speed? What have you tried to troubleshoot? Is it only rsync, or do other tests between the hosts show the same behavior?
Give us more and you might get some help. If these hosts are Linux, I would start with iperf to get a more scientific measurement, and report back with more info.
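If you want something quick and dependency-free in the meantime, here's a rough Python sketch of the kind of one-way TCP throughput measurement iperf automates. It's shown over loopback just so it's self-contained; for a real test, run iperf3 between the actual hosts:

```python
import socket
import threading
import time

CHUNK = 64 * 1024          # per-send buffer size
TOTAL = 16 * 1024 * 1024   # bytes to transfer for the measurement

def measure_throughput():
    # "Server" side: listen on an ephemeral loopback port and blast bytes.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        buf = b"\x00" * CHUNK
        sent = 0
        while sent < TOTAL:
            conn.sendall(buf)
            sent += CHUNK
        conn.close()

    t = threading.Thread(target=serve)
    t.start()

    # "Client" side: time how long it takes to receive TOTAL bytes.
    cli = socket.create_connection(("127.0.0.1", port))
    received = 0
    start = time.monotonic()
    while received < TOTAL:
        data = cli.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.monotonic() - start
    cli.close()
    t.join()
    srv.close()
    return received, received / elapsed / 1e6  # (bytes moved, MB/s)
```

Run the client half on one host and the server half on the other (bound to a real interface instead of 127.0.0.1) and you'll see whether the slowdown is the network or rsync itself.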
Yeah, the previous bypass used a certificate that you’d have to authenticate periodically via 802.1X. This new method doesn’t have that requirement; you just need the specialized hardware for it, like that Azores d20 box or one of the programmable SFP+ xgs-pon modules.
I’ve been using it without any intervention for a little over 8 months now. I even have my /29 static IP block allocated on it, while still being able to use the DHCP address they give out. You get to use the whole /29 too, without the att box stealing one of the addresses.
I think the originator of it was on dslreports, but I couldn’t find the link on mobile. I’m sure if you search on Google you could find a secondary source, like a tech blog or Medium post about it, if that makes you feel better. There’s also a Discord that covers most xgs-pon bypass methods that I could share too. They keep switching it to private at times for whatever reason.
Other links and info, if you are being serious and not passive-aggressive. ATT is quick with DMCA takedowns, so that’s probably why the info can be fleetingly available at times, but dslreports seems to be pretty reliable/resistant to them:
https://www.dslreports.com/forum/r33665048-AT-T-Fiber-XGS-PON-SFP-Modules-for-AT-T-Fiber
https://hackaday.io/project/193110-bypassing-the-bgw-320-using-an-azores-cots-ont
https://forum.netgate.com/topic/99190/att-uverse-rg-bypass-0-2-btc/440
https://simeononsecurity.com/guides/bypassing-the-bgw320-att-fiber-modem-router/
You can totally bypass ATT Fiber now with your own SFP+ xgs-pon module, fiber terminated to your device, without needing to exfil certs or do anything other than clone the identifying info from the att router’s label, depending on the technology they’re using in your area.
There is a storied history in computing of using tongue-in-cheek, self-referential acronyms to add some humor and finality in distinguishing projects that purposely fill a niche in the world of competing, often pricey, commercial software, among other hackerish reasons.
So I bet you’re rubbing the wrong way those of us who remember that GNU’s Not Unix and, more specifically, that Wine Is Not an Emulator. Because they really aren’t.
I don’t believe this is possible; it’s actively protected against in the DHT protocol implementation.
The return value for a query for peers includes an opaque value known as the “token.” For a node to announce that its controlling peer is downloading a torrent, it must present the token received from the same queried node in a recent query for peers. When a node attempts to “announce” a torrent, the queried node checks the token against the querying node’s IP address. This is to prevent malicious hosts from signing up other hosts for torrents. Since the token is merely returned by the querying node to the same node it received the token from, the implementation is not defined. Tokens must be accepted for a reasonable amount of time after they have been distributed. The BitTorrent implementation uses the SHA1 hash of the IP address concatenated onto a secret that changes every five minutes and tokens up to ten minutes old are accepted.
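As a concrete sketch of that token scheme (the secret derivation below is a hypothetical stand-in; a real node would use a random secret rotated on a timer):

```python
import hashlib
import time

SECRET_ROTATION = 300  # secret changes every five minutes
# Tokens minted with the current or previous secret are accepted,
# so tokens up to ~ten minutes old remain valid.

def _secret(epoch):
    # Hypothetical stand-in: real implementations use random secrets.
    return str(epoch).encode()

def make_token(ip, now=None):
    # Token = SHA1(querying node's IP || current secret)
    now = time.time() if now is None else now
    epoch = int(now // SECRET_ROTATION)
    return hashlib.sha1(ip.encode() + _secret(epoch)).digest()

def check_token(ip, token, now=None):
    # On announce, recompute against the announcing node's IP; a token
    # handed to one IP is useless to any other host.
    now = time.time() if now is None else now
    epoch = int(now // SECRET_ROTATION)
    for e in (epoch, epoch - 1):  # current and previous secret window
        if hashlib.sha1(ip.encode() + _secret(e)).digest() == token:
            return True
    return False
```

Because the check binds the token to the IP it was issued to, a malicious host can't replay someone else's token to sign them up for a torrent.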
I believe you would have to know the torrent first, and then you could discover other nodes. This is probably why that tool can’t tell you anything outside of its known list of torrents.
Maybe I’m misunderstanding the purpose or goal, but wouldn’t this be the perfect use case for a virtual machine? I’m surprised no one has suggested that. A one-off, temporary machine that’s easily reverted back to pristine with snapshots sounds like exactly what you would want for testing something like this.
I’m pretty sure I owe my career in computers to the high seas. Napster led to irc, which led to the endless rabbit hole of many a sleepless night in the chat rooms of the 90s.
Wasn’t 1999 the peak of the price gouging from the record labels? It was like $20-25 for a new album for a ton of the major record labels from what I remember.
I have that exact combo loaded up on my desk right now. Great pen and I love that ink.
It’s extremely common in the enterprise, where the cost of a $100k+ server isn’t the most expensive part of running, maintaining, and servicing it. If your home lab isn’t practicing 3-2-1 backups yet (at least three copies of your data, two local/on-site but on different media or devices, and at least one copy off-site), I’d spend money on that before ECC.
From the link:
@PriorProjectEnglish7
The answers in this thread are surprisingly complex, and though they contain true technical facts, their conclusions are generally wrong in terms of what it takes to maintain file integrity. The simple answer is that ECC ram in a networked file server can only protect against memory corruption in the filesystem, but memory corruption can also occur in application code and that’s enough to corrupt a file even if the file server faithfully records the broken bytestream produced by the app.
If you run a Postgres container, and the non-ecc DB process bitflips a key or value, the ECC networked filesystem will faithfully record that corrupted key or value. If the DB bitflips a critical metadata structure in the db file-format, the db file will get corrupted even though the ECC networked filesystem recorded those corrupt bits faithfully and even though the filesystem metadata is intact.
If you run a video transcoding container and it experiences bitflips, that can result in visual glitches or in the video metadata being invalid… again even if the networked filesystem records those corrupt bits faithfully and the filesystem metadata is fully intact.
ECC in the file server prevents complete filesystem loss due to corruption of key FS metadata structures (or at least memory bit-flips… but modern checksumming fs’s like ZFS protect against bit-flips in the storage pretty well). And it protects from individual file loss due to bitflips in the file server. It does NOT protect from the app container corrupting the stream of bytes written to an individual file, which is opaque to the filesystem but which is nonetheless structured data that can be corrupted by the app. If you want ECC-levels of integrity you need to run ECC at all points in the pipeline that are writing data.
That said, I’ve never run an ECC box in my homelab, have never knowingly experienced corruption due to bit flips, and have never knowingly had a file corruption that mattered despite storing and using many terabytes of data. If I care enough about integrity to care about ECC, I probably also care enough to run multiple pipelines on independent hardware and cross-check their results. It’s not something I would lose sleep over.
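The quoted point about app-level corruption is easy to demonstrate: flip a single bit in application memory before the write, and perfectly faithful storage still hands back broken data. (The record contents below are made up purely for illustration.)

```python
import hashlib
import os
import tempfile

def bitflip(data, bit_index):
    # Simulate a cosmic-ray-style single-bit error in application memory.
    b = bytearray(data)
    b[bit_index // 8] ^= 1 << (bit_index % 8)
    return bytes(b)

def demo():
    original = b"key=balance;value=1000"   # what the app *meant* to write
    corrupted = bitflip(original, 170)      # one bit flips before the write

    # The "file server" stores exactly the bytes it is handed.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(corrupted)
        path = f.name
    with open(path, "rb") as f:
        stored = f.read()
    os.unlink(path)

    # Storage was perfectly faithful to its input...
    assert hashlib.sha1(stored).digest() == hashlib.sha1(corrupted).digest()
    # ...yet the application-level record no longer matches the original.
    return stored != original
```

ECC on the file server can't help here; the corruption happened upstream, in the writer's memory.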
DDR5 has built-in on-die error correction, which fixes bit flips inside the memory chip itself but, unlike traditional side-band ECC, doesn’t protect the bus to the CPU or report errors to the OS. It might be worthwhile depending on your setup.
I believe the ECC on the Pi isn’t for the memory chips but for the ARM SoC’s on-die caches.
For me personally, if my racked server supports it, I get ECC. If it doesn’t, I don’t sweat it. Redundancy in drives, power, and networking is much more important to me; those are orders of magnitude more likely to fail, from my anecdotal experience. If I can save those dollars for a higher-probability failure, I do.
DNS is a linchpin of my network (and of the wife approval factor), so I splurge a bit on it: an identical mini computer for physical redundancy that fails over to the same IP if the first box dies. Those considerations come way before whether the server has ECC. Just my $0.02.
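For the same-IP failover part, one common approach is VRRP via keepalived: both boxes run the resolver, but only the current master holds the shared address. A minimal sketch, where the interface name and addresses are assumptions for illustration:

```conf
vrrp_instance DNS_VIP {
    state MASTER            # set to BACKUP on the second box
    interface eth0          # assumption: your LAN-facing interface
    virtual_router_id 53
    priority 150            # use a lower priority (e.g. 100) on the backup
    advert_int 1
    virtual_ipaddress {
        192.168.1.53/24     # hypothetical shared DNS IP both boxes can claim
    }
}
```

Clients keep pointing at the one virtual IP, and the backup claims it within a second or two if the master stops advertising.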
Google lens gives:
hello i am the king and you are a subject and i have money like water and you are the tap and you have big worries and I have another million and I’m going to tell you that we do it together
Not sure either. Maybe they set the default app for handling the mailto: protocol to :(){ :|:& };: or something to make life interesting?
Can we say we “made it” when megacorps start making their own instances and communities?
Find a better instance to use. Some are overloaded, and from my very limited understanding of the tech so far, wherever you have an account or point your client, you’re limited by that instance’s resources to serve up what you’re viewing.
Since I moved, it’s been night and day in terms of performance and responsiveness. I still get the occasional problem, but it’s totally usable.
Yeah, and sometimes my comments get duplicated. Growing pains of the fediverse.
I might have a few hours a month to help out if there’s something I feel I can help with.