> Get rid of sales tax, property tax, exemptions, IRAs, 401ks, short capital gains, long capital gains, medicare, state, all of that bullcrap. Annualized, non-annualized, credits for having an EV on the 4th day of the second Tuesday while being a fisherman, married and single filing differences, end all of that.
I agree with your overall point of simplifying taxes by merging more things into income tax, but some of the taxes you mentioned are levied by local governments to fund themselves. The United States has a federal system; it would be a much bigger change to centralize all of the funding.
Well, local governments (cities and towns) also have expenses -- police, fire departments, trash collection, water and sewage, roads, public works. Schools are partially funded locally. That has to be paid for.
It's theoretically possible for a local government to levy an income tax, but a lot would need to change -- much more than just changing tax rates. Employers and banks report income to the federal government (and states, I suppose, but I live and work in Texas so I don't know much about that). They would have to report that information to towns and cities too. There's also the problem of granularity -- how does an employer or bank know where someone actually lives? If you have a P.O. box in a town, do you have to pay taxes in that town? If you work in a different municipality (not uncommon!), do you have to pay taxes there too? If you have a home in one town, work in another, but spend most of your free time hanging out in a third, are you completely off the hook for supporting the third town?
You could have the federal government collect all the money and then allocate it to state and local governments, but that's a massive change in how American society works, and I'm not sure it's any less complex in the end. Some of the complexity in the tax code (e.g. different levels of capital gains tax) is a policy choice, but some of it reflects the complexity of the real world.
Fair enough. I was thinking of local governments unilaterally deciding to impose income taxes. The impression I got was that existing local income taxes are effectively state funding for municipalities collected and distributed by state governments, which doesn't seem like quite the same thing, but perhaps I'm splitting hairs.
Existing local income taxes are imposed by local authorities. Because states (unlike the federal government) have general police powers, there generally must be authority in state law for the local agency to do so, but that is not the same as state taxes which are redistributed to local governments (which also exist). Some states' laws allowing local jurisdictions to impose income taxes include provisions for the state tax agency to collect the tax alongside state income taxes and distribute it to the taxing authority, to avoid the expense of duplicated administrative functions, but I don't see how that changes the essentially local character of the tax.
I tend to agree with this. The logic should be the same, with different rate tables for each taxing body. What I don't want, though, is the federal govt being the collector and distributor of all the funds. They already wield too much power with their various funding influences for transportation, healthcare, etc. The states and local govts shouldn't need to pander so heavily to the federal govt for funds.
It seems efficient and simple that way. But you don't want federal politics playing that much of a part of your local life. And you don't want your local politicians to have to pander to the federal levels just to get what they need or what is theirs. I think this would result in disaster as the federal politicians are too out of touch with local needs.
If we had a single formula for taxes, then each taxing body could have their own rate table to apply to it, but still collect it directly - then I think that would be a better approach.
For simplicity's sake, take income tax at flat rates. Federal might be 20%, your state might be 10%, your city might be 5%. Maybe my state rate is only 5% and you might want to move here, but nationally we all pay the federal 20% rate.
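A sketch of that scheme (the rates and the `tax_bill` helper are purely illustrative, not a real proposal's numbers):

```python
# Hypothetical flat-rate scheme: one income base, and each taxing body
# applies its own flat rate to the same number.
RATES = {"federal": 0.20, "state": 0.10, "city": 0.05}  # illustrative rates

def tax_bill(income, rates=RATES):
    """Return the amount owed to each taxing body on the same income base."""
    return {body: income * rate for body, rate in rates.items()}
```

The point is that a single definition of "income" is computed once, and only the rate table varies per jurisdiction.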
By definition, a federal system does prevent a single point of collection and distribution. If states could not or did not collect taxes on their own authority, it would not be a federal system. States would just be adjuncts of a national government.
Which misses the point. If the point is to reduce the number of taxes, having the federal government collect 10 different types of taxes instead of state governments collecting 7 types of taxes won't change all the different taxes we have.
There is no singular place we can change how many different taxes you pay. There's... thousands? Tens of thousands? Once you factor in city, county, state, federal, special districts, etc.
Just used this to make a couple quick diagrams. It's easy to use and the diagrams export well. A couple suggestions for improvement:
1. When working with small rectangles, I had trouble getting the rectangle to move instead of enlarge. It looks like holding down the mouse button for a second makes moving more reliable. The UI should make it clearer what I'm actually doing.
2. If I open MonoSketch in another tab, I can't make a second diagram at the same time as the first -- there seems to be one shared context between tabs. I would like to be able to make a new diagram separate from my current one.
Some background for those who aren't familiar: "Romanization" refers to converting Japanese sounds into the Latin (Roman) alphabet. In Japanese, these sounds are written with phonetic characters called kana. (There are two types of kana; I'm only going to talk about hiragana here.) Each kana represents either a vowel or a consonant followed by a vowel. For example: あ (a), こ (ko), ね (ne). Aside from a terminating n/m sound (ん), there are no characters for standalone consonants. There are five vowels (a i u e o).
The kana are usually written in a table where each row is a vowel and each column is a consonant, like on Wikipedia[1]. Most columns of the table have five characters, each representing the same consonant combined with one of the vowels. For example: か/き/く/け/こ ka/ki/ku/ke/ko, ま/み/む/め/も ma/mi/mu/me/mo. Some columns have "missing" sounds (や/ゆ/よ ya/yu/yo); but what's important for our purposes is that some columns have irregular sounds: さ/し/す/せ/そ sa/shi/su/se/so and た/ち/つ/て/と ta/chi/tsu/te/to. There are no si, ti, or tu sounds in standard Japanese; they have shi, chi, and tsu instead.
Using diacritic markings gets you more consonants. Most of these are made by adding a couple tick marks to the corner of the character, which makes the consonant voiced instead of unvoiced. For example: か ka -> が ga, と to -> ど do, ひ hi -> び bi. But the irregular sounds stay irregular: し shi -> じ ji instead of zi, ち chi -> ぢ ji (again) instead of di, and つ tsu -> づ zu instead of du. (す su -> ず zu gives the same sound but in a regular way.)
You can also combine i-vowel characters with y-consonant characters to get sounds with consonant clusters: き ki + や ya = きゃ kya, み mi + よ yo = みょ myo, etc. The irregular sounds remain irregular: し shi + ゆ yu = しゅ shu (instead of syu), ち chi + や ya = ちゃ cha (instead of tya), じ ji + よ yo = じょ jo (instead of zyo). There's a Reddit post with a nice table showing all the available sounds[2].
Now the problem for romanization is this: Should the romanization reflect the irregular sounds in the spoken language? Or should it reflect the regular groupings of the kana characters? づ and ず might both be pronounced "zu", but they come from different linguistic origins, just as "bear" and "bare" do in English. The Hepburn system uses spellings that match the sounds, while the current standard (Kunrei-shiki) uses spellings that match the kana grouping: し si (instead of shi), ち ti (instead of chi), じ zi (instead of ji), つ tu (instead of tsu), しょ syo (instead of sho), etc.
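The divergence only shows up on the irregular syllables, so it fits in a small lookup table (just the kana discussed above; both systems agree on regular syllables like か ka):

```python
# Hepburn vs. Kunrei-shiki spellings for the irregular syllables.
ROMAN = {  # kana: (Hepburn, Kunrei-shiki)
    "し":   ("shi", "si"),
    "ち":   ("chi", "ti"),
    "つ":   ("tsu", "tu"),
    "じ":   ("ji",  "zi"),
    "しゃ": ("sha", "sya"),
    "ちゃ": ("cha", "tya"),
    "しょ": ("sho", "syo"),
    "じょ": ("jo",  "zyo"),
}

def hepburn(kana):
    return ROMAN[kana][0]

def kunrei(kana):
    return ROMAN[kana][1]
```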
The Hepburn system tells you how to pronounce the word[3] at the cost of being a lossy encoding. For anyone familiar with the Latin alphabet, that's almost always the better choice, and it's nearly universal in the Western world. Kunrei-shiki does better reflect the underlying structure of the Japanese language and its native writing system, which is probably why the Japanese government preferred it. But anyone who wants to learn the language is going to learn the kana almost immediately (it's just a few hours with flash cards), so IMHO that's a pretty small advantage.
I deliberately didn't talk about long vowels, glottal stops, the differences between hiragana and katakana, different pronunciations of ん (n), or how to handle ん (n) followed by a vowel, but if you're curious about Japanese romanization those topics may also be of interest to you. I can try to explain more if anyone's curious.
To me, the valuable comments are the ones that share the writer's expertise and experiences (as opposed to opinions and hypothesizing) or the ones that ask interesting questions. LLMs have no experience and no real expertise, and nobody seems to be posting "I asked an LLM for questions and it said...". Thus, LLM-written comments (whether of the form "I asked ChatGPT..." or not) have no value to me.
I'm not sure a full ban is possible, but LLM-written comments should at least be strongly discouraged.
When Quake(world) was released, it was common to play games on dial-up modems, where 250+ milliseconds was a normal ping time. If you played on a distant server, you could easily get over 500 milliseconds or even much worse.
Tangential question: Does anyone know of a basic large-signal equation for a triode (or any other vacuum tube type) like the simplified Ebers-Moll equation for BJTs or the square law equations for the linear and saturation regions of a MOSFET? It would really help my understanding, but whenever I google it I only see academic papers, like it's a weird thing to search for.
The intractability of the Triode is part of the reason why the Pentode exists. And, you will note, the Pentode curves in certain modes look a lot like those of your bog-standard MOSFET.
"All models are wrong, but some are useful." -- George Box
With that said, an N-type JFET is not a bad start. The main rules of thumb work: the grid draws negligible current, and the tube will pass enough current from plate to cathode to maintain a roughly constant cathode voltage above the grid.
If I understand them correctly, Ebers-Moll equations are based on the exponential relationship between voltage and current in a BJT.
But tubes aren't current amplifiers, they're voltage amplifiers, like FETs.
You can look at the "characteristics curves" of tubes (plate curves and transconductance curves), which tell the story of current against plate-to-cathode voltages for fixed grid voltages.
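To the original question: the usual textbook large-signal approximation for a triode is the 3/2-power ("Child-Langmuir") law, I_p ≈ K·(V_gk + V_pk/μ)^(3/2) when the argument is positive, zero otherwise. K (the perveance) and μ (the amplification factor) are tube-specific; the default values below are made up for illustration, not taken from any datasheet:

```python
def plate_current(v_gk, v_pk, k=1.0e-3, mu=100.0):
    """Idealized triode plate current via the 3/2-power law.

    v_gk: grid-to-cathode voltage (typically negative)
    v_pk: plate-to-cathode voltage
    k:    perveance in A/V^1.5 (illustrative value)
    mu:   amplification factor (illustrative value)
    """
    drive = v_gk + v_pk / mu
    if drive <= 0.0:
        return 0.0  # below cutoff: no plate current
    return k * drive ** 1.5
```

Real tubes deviate from this near cutoff and near V_gk = 0 (contact potential, grid current), which is why SPICE triode models bolt on correction terms.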
Vladimirescu, Andrei. The SPICE Book. John Wiley & Sons, 1994.
Gives overview equations for MOSFET device simulations which are probably sufficient for most purposes in Section 3.5, and COMPLETE mathematical descriptions of the SPICE MOSFET implementation in Appendix A.3. Not for the weak.
Any refurbished/used x86 is almost always a better choice than the newer RPis. By the time you're done bringing the RPi up to the spec you need, it's almost always more expensive and less reliable than something x86.
If you fit the envelope, the Beaglebone Black has been out forever. It's not fast. It doesn't have super modern interfaces (DisplayPort, PCI-E). It's not super tiny.
However, it is solid. It actually runs in the 500mA USB envelope and doesn't need a heat sink. It has eMMC so you don't have to fiddle with garbage uSD cards. It is incredibly well documented thanks to TI. It has a useful number of I/O pins (unlike the measly amount on the RPi). It has tons of the kind of basic hardware interfaces that you need to interface to things. The real time processors on it can often substitute for FPGAs. There are industrial versions for $10 more than the standard $50. And the software follows bog-standard mainline Debian rather than being some weird, undocumented, bodged-up thing that needs to boot from the GPU.
I don't think this holds up. Historically, memory sizes have increased exponentially, but access times have gotten faster, not slower. And since the access time comes from the memory architecture, you can get 8 GB of RAM or 64 GB of RAM with the same access times. The estimated values in the table are not an especially good fit (30-50% off) and get worse if you adjust the memory sizes.
Theoretically, it still doesn't hold up, at least not for the foreseeable future. PCBs and integrated circuits are basically two-dimensional. Access times are limited by things like trace lengths (at the board level) and parasitics (at the IC level), none of which are defined by volume.
Not true; if that were so, you could in theory build 64 GB of L1/L2/L3-style cache instead of 1-2 MB.
For the SRAM in L1/L2/L3 you need to manufacture 6 transistors per bit, while for DRAM you need 1 transistor and 1 capacitor.
Thus a chip with that much memory at that speed would become very big, and signal propagation through the wires would start to matter: at the semiconductor level it makes a difference whether a signal has to travel 1 inch or 10 inches billions of times per second. That creates an "efficiency border" on how big your SRAM can be, as a function of chip size (and other factors like thermal effects).
Why didn’t computers have 128 terabytes of memory ten years ago? Because the access time would have been shit. You’re watching generation after generation of memory architectures compromise between access time and max capacity and drawing the wrong conclusions. If memory size were free we wouldn’t have to wait five years to get twice as much of it.
On the whole I agree, but the details keep bumping back into my assertion. Power use was a factor of Dennard scaling until very recently. So again you just wait until the next hardware generation and then trade a little time for more space.
Memory access times have not significantly improved in many years.
Memory bandwidth has improved, but it hasn't kept up with memory size or with CPU speeds. When I was a kid you could get a speedup by using lookup tables for trig functions - you'd never do that today, it's faster to recalculate.
2D vs 3D is legit, I have seen this law written down as O(sqrt N) for that reason. However, there's a lot of layer stacking going on on memory chips these days (especially flash memory or HBM for GPUs) so it's partially 3D.
> Memory access times have not significantly improved in many years.
We could say that it has actually gotten worse, not better, if we put it in context. For example, 90 ns latency coupled with a 3 GHz core is "better" than 90 ns latency coupled with a 5 GHz core. In the latter case, the CPU core ends up stalled for 450 cycles, while in the former it's only 270 cycles.
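The cycle counts are just latency times clock frequency (ns × GHz = cycles):

```python
def stall_cycles(latency_ns, clock_ghz):
    """Cycles a core spends stalled on one memory access of the given latency."""
    # 1 GHz = 1 cycle per nanosecond, so ns * GHz = cycles
    return latency_ns * clock_ghz
```

So the same DRAM costs a faster core proportionally more cycles per miss.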
While in absolute terms memory access has gotten faster, in relative terms it is MUCH slower today, compared to CPU speeds.
A modern CPU can perform hundreds or even thousands of computations while waiting for a single word to be read from main memory - and you get another order of magnitude slowdown if we're going to access data from an SSD. This used to be much closer to 1:1 with old machines, say in the Pentium 1-3 era or so.
And regardless of any speedup, the point remains as true today as it has always been: the more memory you want to access, the slower accessing it will be. Retrieving a word from a pool of 50PB will be much slower than retrieving a word from a pool of 1MB, for various fundamental reasons (even address resolution has an impact, even if we want to ignore physics).
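As a toy model of that claim (an assumption for illustration, not a measured law): if memory is laid out in 2D and worst-case wire length grows with the side of the array, access latency scales like the square root of capacity, matching the O(sqrt N) formulation mentioned above.

```python
import math

def relative_latency(size_bytes, base_bytes=1_000_000):
    """Toy O(sqrt(N)) model: latency relative to a 1 MB pool, assuming
    worst-case wire length grows with the side of a 2D memory array."""
    return math.sqrt(size_bytes / base_bytes)
```

Under this (very crude) model, a 50 PB pool comes out around 200,000x slower per access than a 1 MB pool, ignoring caching, banking, and everything else real hierarchies do.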
What intensity is “high-intensity?” The article doesn’t give a number. Is this something that can be done with a few bright LEDs or do you need a specialized lighting array?
If you follow the links to the supplementary info it gives you intensity level, for example "The prepared dish was placed in a blue LED irradiation device and irradiated at 1.25 W/cm2 for 3 h."
As a reference, noon sunlight is very roughly 1000 W/m^2 or 0.1 W/cm^2, so this is pretty intense and I suspect would not be eye safe.
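Quick arithmetic on that comparison:

```python
led = 1.25              # W/cm^2, from the supplementary info quoted above
sun = 1000 / 10_000     # noon sunlight ~1000 W/m^2 -> 0.1 W/cm^2 (1 m^2 = 10,000 cm^2)
ratio = led / sun       # the LED rig is roughly 12.5x noon sunlight
```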