Rounding that to 1 error per 30 days per 256M, for 16G of RAM that would translate to 1 error roughly every half a day (16G/256M = 64, so ~64 errors per 30 days, or about one every 11 hours). I do not believe that at all, having done memory testing runs for much longer on much larger amounts of RAM. I've seen the error counters on servers with ECC RAM, which remain at 0 for many months; and when they do start increasing, it's because something is failing and needs replacing. In my experience RAM failures are much rarer than HDD or SSD failures.
an ordinary part mapped into 8KB of CPU address space at >6000->7FFF (the ROM), and another part that normally held Graphics Programming Language bytecode, mapped into a completely separate “Graphics ROM” address space at >6000->F7FF (the “GROM”).
This reminds me of the NES, which has separate PRG and CHR address spaces, the latter being exclusively for the PPU to display its graphics.
The TI-99/4 had 256 bytes of scratchpad RAM accessible to the CPU. The CPU architecture had no general-purpose registers and basically only 3 on-chip registers: the status register, the program counter, and the workspace pointer. The WP pointed to a 32-byte range of RAM that worked like a set of 16 16-bit registers, and a subroutine call was a matter of storing the current PC and WP and loading a new pair (a whole new set of registers). That scratchpad RAM was the closest equivalent of "the stack" on a modern x86 or Arm CPU.
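For anyone who hasn't met this architecture, here's a rough C model of what that workspace switch does. The flat word-addressed memory and the names are mine (illustrative only), but the R13-R15 linkage is how the real BLWP/RTWP pair works:

```c
#include <stdint.h>

/* Rough model of the TMS9900 workspace mechanism: the CPU keeps only
 * WP, PC and ST on-chip; the 16 "registers" R0-R15 are just 16 words
 * of RAM starting at WP. A BLWP call swaps in a whole new register set. */
typedef struct {
    uint16_t wp, pc, st;    /* the only true on-chip registers */
    uint16_t ram[0x8000];   /* word-addressed memory (illustrative) */
} Tms9900;

static uint16_t *reg(Tms9900 *cpu, int n) {
    return &cpu->ram[(cpu->wp >> 1) + n];   /* Rn lives at WP + 2*n */
}

/* BLWP: load a new WP/PC pair, saving the old context into the NEW
 * workspace's R13-R15; this is the link RTWP uses to return. */
static void blwp(Tms9900 *cpu, uint16_t new_wp, uint16_t new_pc) {
    uint16_t old_wp = cpu->wp, old_pc = cpu->pc, old_st = cpu->st;
    cpu->wp = new_wp;
    cpu->pc = new_pc;
    *reg(cpu, 13) = old_wp;
    *reg(cpu, 14) = old_pc;
    *reg(cpu, 15) = old_st;
}

/* RTWP: restore the caller's WP/PC/ST from R13-R15. */
static void rtwp(Tms9900 *cpu) {
    cpu->wp = *reg(cpu, 13);
    cpu->pc = *reg(cpu, 14);
    cpu->st = *reg(cpu, 15);
}
```

A nice side effect: a "register save" on call costs nothing, since the callee simply uses a different 32-byte window of RAM.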
Programs were stored as bytecode in memory addressable only by the graphics processor (note: not a GPU). Executing a program meant the CPU would write the GROM address to a register on the graphics chip, issue a request to fetch, and then read the byte back from another register. It then had to interpret that byte through the interpreter in ROM.
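Roughly what that fetch sequence looks like from the CPU side, sketched in C. The port addresses are the commonly documented 99/4A ones, but treat them (and the auto-increment detail) as assumptions to verify:

```c
#include <stdint.h>

/* Memory-mapped GROM ports as seen from the CPU (commonly documented
 * 99/4A addresses; double-check against the Editor/Assembler manual). */
#define GROM_RD_DATA ((volatile uint8_t *)0x9800)
#define GROM_WR_ADDR ((volatile uint8_t *)0x9C02)

/* Set the 16-bit GROM address: two byte writes, high byte first. */
static void grom_set_addr(uint16_t addr) {
    *GROM_WR_ADDR = (uint8_t)(addr >> 8);
    *GROM_WR_ADDR = (uint8_t)(addr & 0xFF);
}

/* Each data read returns a byte and auto-increments the GROM's
 * internal address, so sequential bytecode fetches need no re-seek. */
static void grom_fetch(uint16_t addr, uint8_t *buf, int n) {
    grom_set_addr(addr);
    for (int i = 0; i < n; i++)
        buf[i] = *GROM_RD_DATA;
}
```

The interpreter in console ROM essentially does this one byte at a time, dispatching on each GPL opcode it pulls out.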
There were true separate address spaces, not different ranges in the same flat address space like on the NES. The CPU could not address the GROM directly.
I had the Mini Memory cart, which had a line-by-line assembler and let me dump the ROMs. Many hours were spent hand-disassembling the OS for my TI-99/4A.
We could try to find this loading using static analysis, but remember that I’m not comfortable reverse engineering this firmware, and I want to demonstrate a more dynamic approach.
Perhaps this is a "two types of people" situation, but I would absolutely not do that; once you dump the flash you can analyse and inspect it carefully at your leisure as it is otherwise inert, but messing around with the device itself presents a very real risk of accidentally bricking it.
If you read the article, the OP points out that static analysis for this platform is not supported in Ghidra.
Also, reading between the lines, I think it's safe to assume the author did dump the flash.
> Using the strings command on the firmware dump reveals a lot of interesting details about the webserver itself, but nothing obvious hints us to the password.
The author is referring to limitations in analysing banking:
> Ghidra supports the 8051 architecture but not code banking.
Usually in these ISAs an I/O port or a register sets the bank number, so any processor module should be able to resolve concrete banked references. But you still need to know what that register holds on any given code path, and those values are likely computed dynamically.
No tooling can give you this out of the box, as it relies on knowing the concrete initial state of the system (i.e. memory and register contents) and knowing what to return when hooking into I/O accesses.
Once these are known, we can leverage the built-in pcode emulator and run it with this state. It seems Ghidra nowadays has some built-in support for Z3, but I've personally never used it, so I'm not sure how viable it is for symbolic execution. Regardless, with either approach we would now have concrete banked code references being resolved, and could script some auto-annotation of the disassembly with these references. These would be equivalent to what the author gathered from the logic analyzer trace.
A purely static approach would seemingly mean manually brute-forcing through all possible bank numbers at any given code path, which I guess is only viable if you have the time for it.
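To make the banking problem concrete, here is a hedged sketch of the address arithmetic such a script ends up doing once the bank value on a given path is known. The layout (a common 8051 arrangement: fixed low 32K, banked 32K window at 0x8000) is hypothetical, not taken from this firmware:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical banked-8051 layout: CODE 0x0000-0x7FFF is fixed,
 * CODE 0x8000-0xFFFF is a window into bank N of the flash. The dump
 * is flat, so a (cpu_addr, bank) pair maps to a file offset. */
#define BANK_WINDOW 0x8000u
#define BANK_SIZE   0x8000u

/* Translate a CPU code address plus bank-register value into an
 * offset into the flat flash dump; (size_t)-1 if out of range. */
static size_t banked_to_flat(uint16_t cpu_addr, uint8_t bank, size_t dump_len)
{
    size_t flat = (cpu_addr < BANK_WINDOW)
        ? cpu_addr                                         /* common area */
        : (size_t)bank * BANK_SIZE + (cpu_addr - BANK_WINDOW);
    return flat < dump_len ? flat : (size_t)-1;
}
```

An emulator run (or the author's logic analyzer capture) yields the (cpu_addr, bank) pairs; annotating the disassembly is then just applying this mapping.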
More or less; built on top of it with UDP/ICMP added.
When writing the server and client, a lot of time is consumed by additional features, not by implementing the spec itself. For instance, in order to be truly stealthy we have to make sure that it looks *exactly* like Chromium on the outside, and then maintain this similarity as Chromium's TLS implementation changes from version to version. Or here's another example: on the server side we need anti-probing protection to make it harder to detect what the server does.
We support both H2 and H3, and this is necessary. QUIC is not bad, but there are places where it either does not work at all or works too slowly.
And one more thing: even though the code and spec are only being published now, we've been using TrustTunnel for a long time; we started before CONNECT_UDP became a thing.
We’re considering switching to it though (or having an option to use it) just to make the server compatible with more clients.
Ah, so you resolve domains before applying the routes to the profile, I see. As per the spec, network extensions are not allowed to reroute traffic outside the tunnel: destinations set in the tunnel network settings must be routed inside the tunnel. This means users have to know their domains upfront; the app cannot do this dynamically, if only to comply with Apple's rules.
Actually, no, we don't resolve them. We scan the incoming ClientHello before making a decision on where to route the connection. If the connection should be bypassed, we make the connection ourselves and proxy the traffic. Implementing it that way requires having a TCP stack right in the client.
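For the curious, peeking at the ClientHello without terminating TLS is mostly offset-walking. A minimal, heavily bounds-checked sketch of SNI extraction (not TrustTunnel's actual code; layout per RFC 8446, assuming the whole ClientHello arrives in one record):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Extract the SNI hostname from a raw TLS ClientHello record.
 * Returns the hostname length copied into out, or -1 on failure. */
static int sni_from_client_hello(const uint8_t *p, size_t len,
                                 char *out, size_t out_len)
{
    size_t i;
    /* TLS record header: type 0x16 (handshake), version, 2-byte length */
    if (len < 5 || p[0] != 0x16) return -1;
    i = 5;
    /* Handshake header: type 0x01 (ClientHello), 3-byte length */
    if (i + 4 > len || p[i] != 0x01) return -1;
    i += 4;
    i += 2 + 32;                                  /* legacy_version + random */
    if (i + 1 > len) return -1;
    i += 1 + p[i];                                /* session_id */
    if (i + 2 > len) return -1;
    i += 2 + (size_t)((p[i] << 8) | p[i + 1]);    /* cipher_suites */
    if (i + 1 > len) return -1;
    i += 1 + p[i];                                /* compression_methods */
    if (i + 2 > len) return -1;
    size_t ext_end = i + 2 + (size_t)((p[i] << 8) | p[i + 1]);
    i += 2;
    if (ext_end > len) return -1;
    while (i + 4 <= ext_end) {                    /* walk the extensions */
        uint16_t ext_type = (uint16_t)((p[i] << 8) | p[i + 1]);
        uint16_t ext_len  = (uint16_t)((p[i + 2] << 8) | p[i + 3]);
        i += 4;
        if (i + ext_len > ext_end) return -1;
        if (ext_type == 0x0000 && ext_len >= 5) { /* server_name */
            /* list len (2), name_type (1), name len (2), then hostname */
            uint16_t name_len = (uint16_t)((p[i + 3] << 8) | p[i + 4]);
            if (p[i + 2] != 0) return -1;         /* only host_name type */
            if (5 + (size_t)name_len > ext_len) return -1;
            if ((size_t)name_len + 1 > out_len) return -1;
            memcpy(out, &p[i + 5], name_len);
            out[name_len] = '\0';
            return (int)name_len;
        }
        i += ext_len;
    }
    return -1;
}
```

The buffered bytes then get replayed to whichever upstream the routing decision picked, whether that's the tunnel or a direct connection.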
Wanting to be able to use anybody's machine is very strange, agreed.
From a support/IT perspective, though, the more alike everybody's machines are, the easier the job is.
The last software shop I worked at, we had a default set of tools and configs. It was a known happy path. You were allowed to adventure off of that path, but you were mostly on your own.
Devcontainers[1] or some similar technology is a must. Use whatever specific IDE you want, but the development environment itself should be identical for everyone on the team.
No more "works on my computer" issues. The environment is always identical.
> Wanting to be able to use anybody's machine is very strange, agreed.
Very useful if people are struggling to create reliable repro steps that work for me - I can simply debug in situ on their machine. Also useful if a coworker is struggling to figure something out, and wants a second set of eyes on something that's driving them batty - I can simply do that without needing to ramp up on an unfamiliar toolset. Ever debugged a codegen issue that you couldn't repro, that turned out to be a compiler bug, that you didn't see because you (and the build servers) were on a different version? I have. There are ways to e.g. configure Visual Studio's updater to install the same version for the entire studio, which would've eliminated some of the "works on my machine" dance, but it's a headache. When a coworker shows me a cool non-default thing they've added a key binding for? I'll ask what key(s) they've bound it to if they didn't share it, so we share the same muscle memory.
It's quite common if you work in a team of engineers, or in a large company with many engineers.
Having consistent machine and OS and app configurations enables better (lower cost, higher reliability) scripting and tooling solutions in things like repos and infrastructure.
Not unlike consistency in language and compiler choices.
Having a consistent setup makes it easier for your organization's IT team to support you, troubleshoot issues, etc. It also makes it easier for you to collaborate with other members of your team, or even other teams. If your coworker Fred comes to you asking for help on how to refactor something, for instance, it will go much more easily if you're running the same IDE with the same refactoring tools.
Organizations establish and enforce standards for a reason.
Several years ago I remember making something that could be considered a custom "distro" of macOS, VM-oriented and as minimal as it could be for CI purposes, by starting with the recovery/installer partition and adding what I needed while deleting what I didn't. Not surprisingly, there was next to no precedent that I could find, and my biggest source of information was the Hackintosh community. Nonetheless it was not too difficult, if tedious, to do, and the final disk image size I arrived at was less than 1GB. In general the macOS community is, to put it bluntly, mostly computer-illiterate non-power-users who will either advocate against you or otherwise have no idea what you're talking about. In contrast there's a HUGE amount of existing information on modding Windows, and of course Linux sits at the other extreme.
Isn't the stock one open-source? Even if it isn't, Java is really trivial to decompile (and recompile), and removing features is much easier than adding them. Take advantage of Android the way it was meant to be taken advantage of.