When I was learning a bit of Japanese, it seemed quite obvious to me that the script was most likely intentionally left (or made) complicated. With a normal alphabet that actually fits the structure of the Japanese language, it would be just another ordinary language to learn.
Using a syllable script for a grammar built on word endings just doesn't make sense. Using two syllable scripts is just strange.
It most likely helped the leaders there stay in control. Without native Japanese translators, foreigners can't get very far.
WE USE THREE SYLLABLE SCRIPTS ALL THE TIME IN ENGLISH. Lowercase and uppercase are two divergent evolutions of the Roman alphabet that got shoved together for no particular reason. 𝘈𝘯𝘥 𝘪𝘵𝘢𝘭𝘪𝘤𝘴 𝘪𝘴 𝘢 𝘵𝘩𝘪𝘳𝘥 𝘥𝘦𝘴𝘤𝘦𝘯𝘥𝘢𝘯𝘵 𝘰𝘧 𝘵𝘩𝘦 𝘢𝘭𝘱𝘩𝘢𝘣𝘦𝘵 𝘵𝘩𝘢𝘵 𝘸𝘦 𝘮𝘦𝘳𝘨𝘦𝘥 𝘪𝘯 𝘵𝘰𝘰.
Uppercase and lowercase are not spelt identically any more than katakana and hiragana are.
q-Q, e-E, r-R, a-A, b-B, and most of the rest are completely different characters. Even m and M are not as straightforwardly connected as someone whose first language uses a Latin-based script would assume.
Thinking that e or a looks anything like E or A comes entirely from your first language using the Latin alphabet (I'm assuming here, but I can't think of any other reason they would seem similar).
I've done language conversation exchanges with Japanese English learners and the characters really are completely different to someone learning them for the first time.
へ is virtually identical to ヘ visually. Most of the katakana and hiragana pairs derive from the same kanji and share visual similarities, especially if you're familiar with Chinese calligraphy. So what?
Well, I didn't make that up myself; I got the idea from linguistics books and from people who lived there for a long time. And those linguists were very clear that the language came first and the syllable script was bolted on afterwards.
I guess with the current state of even simple static websites/apps, we often seem to need performance. I hadn't even thought about browser compatibility. Might any WebKit-based browser (with user-agent overrides) be close enough?
I recently experimented with a RAM disk. In practice it didn't change anything.
OS caching already seems clever enough: once the OS has figured out that certain directories are important, anything in them appears to be served from RAM anyway.
A RAM disk makes this less of a black box and more deterministic in terms of guaranteed access times, but in daily use the RAM disk just didn't make a difference.
I'm on Windows; other OSes may differ. I experimented a lot with load times and compared HDD, SSD, and RAM disk.
I wrote a small program that loads everything from a big code repository into RAM. On the first read, HDD, SSD, and RAM disk differ a lot; on the second read, the HDD's lag (50 s?) almost disappeared completely. Caching had kicked in.
The RAM disk has less initial lag, but it also has to be filled first, so instead of moving everything to a RAM disk, simply touching every file so the OS cache kicks in is faster and more convenient.
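The touch-everything trick can be sketched in a few lines. A minimal sketch, assuming the repository sits under a single directory tree; the function name and chunk size are made up for illustration:

```python
import os

def warm_cache(root):
    """Read every file under root once so the OS page cache holds it.

    Returns the number of bytes read.
    """
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    # Reading and discarding the bytes is enough: the OS
                    # keeps the pages in its file cache for later reads.
                    while chunk := f.read(1 << 20):
                        total += len(chunk)
            except OSError:
                pass  # skip locked or vanished files
    return total
```

After one pass like this, subsequent reads of the same files should come out of RAM, which matches the second-read behavior described above.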
Great news... Is that incremental compile time or a clean build? I use similar tooling (an older version... :-)), and I'm starting to evaluate what's going on with those new versions.
Compile-time comparisons are rarely published. I've sometimes tried to google for them but didn't find much.
For a typical 1.5 MLOC project, my clean-build times sit around 40 seconds.
I think that's pretty good, but I don't see a way to improve it much. Without writing a compiler from scratch, 10 seconds seems out of reach for me...
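For what it's worth, a reproducible way to compare clean-build times is to clean, time the build, and take the median over a few runs. A minimal sketch with hypothetical `clean_cmd`/`build_cmd` argument lists (e.g. a `make clean` / `make` pair), not tied to any specific build system:

```python
import statistics
import subprocess
import time

def time_clean_build(clean_cmd, build_cmd, runs=3):
    """Run clean_cmd, then time build_cmd, several times over.

    Both commands are argument lists, e.g. ["make", "clean"] and
    ["make", "-j8"] (hypothetical examples). Returns the median
    wall-clock build time in seconds; the median damps outliers
    caused by cold caches or background load.
    """
    samples = []
    for _ in range(runs):
        subprocess.run(clean_cmd, check=True)
        start = time.perf_counter()
        subprocess.run(build_cmd, check=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)
```

Running the first iteration twice (or warming the cache beforehand, as discussed above) separates disk effects from actual compile work.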
Do you think your company would consider a cloud compiler if it were 5x faster? There are issues such as latency, delta synchronization, and tool versioning, but it's doable.
If you build such a service (pun intended!), the main issues are probably cost and security.
Build-time improvements are very valuable to me, but I'm not sure a cloud would help much. Syncing alone can easily eat up 10 s. Also, my tools are quite unusual (including a parser I wrote myself).
To be faster than a local build, a cloud machine would have to be massively overclocked, and I can already get good results locally.
For real improvements I'd need a big step forward in compiler technology (last time I looked, there were just too many single-threaded bottlenecks in my build process). That's nothing a cloud can solve for me.
Security is one issue, though attackers would most likely just get confused trying to understand what's going on. And I generally don't like the costs...
I don't know about China, but the science around COVID seems to have a really strong cultural component that was totally unfamiliar to me before.
Looking at a German epidemiologist's blog, I found something like: "Next on the list is to prove that the government measures worked."
I would have expected something like: "I'm looking at the data and want to find out what helps."
I once took one of those online tests for spotting deepfakes.
By marking anything with reflections in the eyes as fake, I got only one answer wrong (the real guy there just looked really strange).