> Instead of running large runtimes locally, it acts as a lightweight agent client and delegates reasoning to cloud LLM APIs (GLM/GPT/Claude), while keeping orchestration local.
I thought that's what OpenClaw already is -- it can use a local LLM if you have one, but doesn't have to. If it's intrinsically heavy, that's only because it's JavaScript running in Node.js.
I tried the summit of Mt Ruapehu here in NZ and got 358.8 km to Mt Owen. Not bad as I was expecting Tapuae-o-Uenuku which is a little shorter at 342 km.
One advantage in NZ is that on a nice day you actually have a good chance of seeing it.
Oh ... clicking on Mt Owen doesn't return the favour ... or the other nearest peaks. But Culliford Hill does show a return back to Ruapehu, 355.4 km. Clicking on Tapuae-o-Uenuku also, as expected, gives a line to Ruapehu: 342.3 km.
Mt Cook is high, but has too many other high peaks near it.
Mt Taranaki is isolated, but doesn't turn up any very long distances.
I don't expect any other candidates in NZ.
Update: actual and accidental photo of Tapuae-o-Uenuku from Ruapehu (342 km), seven months ago.
And, as pointed out in a comment, also Mount Alarm 2.5 km further.
What is the longest in North America? Or Europe proper -- not Elbrus (which I've not been to but have been close enough to see from several places, e.g. a house in Lermontov (~94 km only), the summit of Beshtau (93 km), the Dombai ski field (~63 km), and somewhere on the A157 (~50 km)).
Wow, glad you had fun exploring. It suddenly made me think of a little feature that I'm not sure we did the best job of exposing. In the little trophy-icon toggle on the right, there's the Top Ten list of views, and under those there's a little line that just says "In current viewport: 123km". Did you see that? Did it make sense? I implemented it, so of course I know that it's better than clicking all the points around a peak to find the longest view from a mountain summit. But maybe it's not obvious to other users? What I do is zoom in so that the viewport contains only the area of the summit (or indeed an entire country, for that matter) that I'm interested in, then look at that "In current viewport:" line without having to click anything.
That gives a longest in NZ of 365.3 km from Ruapehu, skirting past close by Tapuae-o-Uenuku (in the Inland Kaikoura Range) to a point on the Seaward Kaikoura Range near the peak of Manakau. Clicking on the actual Manakau peak also gives 365.3 km back to Ruapehu.
I can't seem to find a peak to get a reverse path back to Mt Rainier. Everything I try gets stuck in the Olympic Peninsula. (I was there once ... 1998 or so ... a place called Hurricane Ridge IIRC)
One thing to note about finding reverse lines is that they're not truly mathematically identical: the observer always has a height of 1.65 m, while the destination is always some point at the surface, therefore 0.0 m. It doesn't always make a difference, but it sometimes can.
The thing about the observer height that I always try to remember is that features really close to the observer can make an outsized difference. Imagine how simply putting a hand in front of your eyes can suddenly make the whole world disappear. So in theory, a mere change of a few centimetres in the height of the observer could effect a similarly dramatic change in the view.
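To put a number on that asymmetry, here's a quick sketch using the textbook horizon formula for a spherical Earth (ignoring refraction) with the 1.65 m observer height mentioned above -- my own illustration, not the site's actual algorithm:

```python
import math

R = 6_371_000.0  # mean Earth radius, metres

def horizon_distance(h):
    """Distance (m) to the horizon for an eye h metres above a smooth
    sphere, ignoring refraction: d = sqrt(2*R*h + h*h)."""
    return math.sqrt(2 * R * h + h * h)

# A 1.65 m observer already sees ~4.6 km further than a point at the
# surface (0.0 m), which is why A->B and B->A lines aren't identical.
print(round(horizon_distance(1.65)))  # ~4585
```

So even before terrain enters the picture, swapping which end is the "observer" shifts the geometry by a few kilometres.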
Not a geologist, but it's interesting that many of these sites are close to the equator. I suppose that's where mountains are higher, because tectonic plates are more active?
Not a geologist either, but an astronomer. I've never heard that tectonic activity has any association with proximity to the equator.
Mountains can rise higher near the equator because gravity is weakest there; the whole Earth bulges along the equator. But I don't think the effect is measurable.
While Everest (8849m) is the highest point above Sea Level, Chimborazo (6267m) in Ecuador is further from the centre of the Earth (about 2000 metres further), due to the equatorial bulge. It's very measurable.
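You can sanity-check that figure with the geocentric radius of the WGS84 ellipsoid plus summit elevation. A rough sketch -- the summit latitudes (27.99° N, 1.47° S) are my approximations, and I'm ignoring the small difference between geodetic and geocentric latitude:

```python
import math

# WGS84 ellipsoid constants
a = 6378137.0       # equatorial radius, m
b = 6356752.3142    # polar radius, m

def geocentric_radius(lat_deg):
    """Distance from the Earth's centre to the ellipsoid surface at a
    given geodetic latitude."""
    c = math.cos(math.radians(lat_deg))
    s = math.sin(math.radians(lat_deg))
    num = (a * a * c) ** 2 + (b * b * s) ** 2
    den = (a * c) ** 2 + (b * s) ** 2
    return math.sqrt(num / den)

# Approximate summit latitudes, elevations as quoted above
everest    = geocentric_radius(27.99) + 8849.0   # 27.99 N
chimborazo = geocentric_radius(-1.47) + 6267.0   # 1.47 S
print(f"Chimborazo summit is {chimborazo - everest:.0f} m further out")
```

That comes out at roughly 2 km, consistent with the "about 2000 metres" figure.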
Well, that's not what the claim and clarification were about. The question was: can a mountain rise higher at the equator compared to higher latitudes?
It is not about the highest point from the centre of the Earth. That is related to the equatorial bulge but irrelevant to the discussion.
It's also interesting because the radius of curvature is smaller there, meaning the distance to the horizon is shorter north-south, and a lot of these views are north-south. So the increase in mountain height more than overcomes the other effect!
The Earth is, to an approximation, an oblate spheroid. It's not that it isn't symmetric, but at the equator the north-south axis has a higher rate of curvature than anywhere else (while the east-west rate is somewhat lower, because of the larger circumference due to the bulge).
So it's remarkable that these long lines of sight are near the equator on a north-south axis: the high rate of curvature in that direction at those latitudes should give the shortest distance to the horizon anywhere on Earth, making those lines of sight all the more impressive!
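To put rough numbers on that: with WGS84 constants, the meridional (north-south) radius of curvature at the equator is about 6335 km, versus about 6378 km east-west. A sketch of the resulting horizon distances, using an illustrative 5000 m summit height of my own choosing:

```python
import math

# WGS84 ellipsoid constants
a = 6378137.0                    # equatorial radius, m
b = 6356752.3142                 # polar radius, m
e2 = 1.0 - (b * b) / (a * a)     # first eccentricity squared

def radii_of_curvature(lat_deg):
    """Return (meridional, prime-vertical) radii of curvature: the
    effective north-south and east-west Earth radii at a latitude."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    m = a * (1.0 - e2) / (1.0 - e2 * s2) ** 1.5   # north-south
    n = a / math.sqrt(1.0 - e2 * s2)              # east-west
    return m, n

m_eq, n_eq = radii_of_curvature(0.0)
h = 5000.0   # illustrative equatorial summit height, m
ns = math.sqrt(2.0 * m_eq * h)   # horizon distance looking north-south
ew = math.sqrt(2.0 * n_eq * h)   # horizon distance looking east-west
print(f"N-S horizon {ns / 1000:.1f} km vs E-W {ew / 1000:.1f} km")
```

The north-south horizon comes out only about 0.3% shorter than east-west, so it's a real but small handicap for those equatorial sightlines.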
I've been using a K3 for a few weeks now. It's quite pleasant, and if I use all 16 cores (8x X100 and 8x A100) then it builds a Linux kernel almost 3x faster than my one year old Milk-V Megrez and almost 5x faster than K1.
Even using just the "AI" A100 cores is faster than the Megrez!
It's also great that it's now faster than a recent high end x86 with a lot of cores running QEMU.
The X100 cores are derived from T-Head's 2019 OpenC910. The A100 cores are derived from SpacemiT's own X60 cores in their K1/M1 SoC.
Note that the all-cores K3 result is running a distccd on each cluster, which adds quite a bit of overhead compared to a simple `make` on local cores. All the same, it shaves 2.5 minutes off. In theory, doing an Amdahl calculation on the X100 and A100 times, it might be possible to get close to 11m50s with a more efficient means of using heterogeneous cores, but distcc was easy to do.
Or, you could just run independent things (e.g. different builds) on each set of 8 cores.
Or maybe there's a lower overhead way to use distcc, or something else that is set up to distribute work to more than one set of resources.
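The "Amdahl calculation" here is just combining the two clusters' build rates: if each cluster alone finishes in a known time, perfectly split work finishes at the sum of the rates. A sketch with hypothetical per-cluster times (the actual X100-only and A100-only measurements aren't given above):

```python
def ideal_combined_minutes(t_a, t_b):
    """Two clusters that alone finish in t_a and t_b minutes, working
    on disjoint halves with perfect load balancing, combine their
    rates: 1 / (1/t_a + 1/t_b)."""
    return 1.0 / (1.0 / t_a + 1.0 / t_b)

# Hypothetical per-cluster times, for illustration only:
t = ideal_combined_minutes(20.0, 29.0)
print(f"{int(t)}m{round(t % 1 * 60)}s")  # ~11m50s
```

Any real scheduler falls short of this bound because the final link and other serial steps can't be split, which is where the distcc overhead shows up.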
I've written a small (~40 instructions) statically linked pure asm program [1] that switches the process to the A100 cores [2] then EXECs the rest of the arguments.
So you can just type something like:
ai bash
or
ai gcc -O primes.c -o primes
or
ai make -j8
... and that command (and any children) run safely on the A100 cores instead of the X100 cores.
It would be great if the upstream Linux kernel got official, nicely-worked-out support for heterogeneous cores -- more and more RISC-V SoCs are going to be like this, but Intel would also benefit, e.g. with some cores having AVX-512 and some not; I even recall one Arm (Samsung) SoC with big.LITTLE cores that had different cache line sizes.
But in the meantime, this is workable and useful.
[1] so there is no possibility of the dynamic linker, C start code, or libc using the V extension and putting the process into a state dangerous to migrate to the different-VLEN cores.
[2] by getting the PID and writing it to `/proc/set_ai_thread`
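For illustration, here's the same mechanism sketched in Python -- purely to show the interface from [2], not a safe replacement, since a full language runtime could itself touch the V extension before the migration, which is exactly what the pure-asm version avoids:

```python
import os
import sys

AI_PROC = "/proc/set_ai_thread"  # the K3 kernel interface from [2]

def migrate_to_ai_cluster(proc_path=AI_PROC):
    """Write our PID to the proc file, asking the kernel to move this
    process to the A100 cluster. Returns False if the interface is
    missing (i.e. not running on a K3 kernel)."""
    try:
        with open(proc_path, "w") as f:
            f.write(str(os.getpid()))
        return True
    except OSError:
        return False

if __name__ == "__main__" and len(sys.argv) > 1:
    if not migrate_to_ai_cluster():
        print("warning: no /proc/set_ai_thread; staying on current cores",
              file=sys.stderr)
    os.execvp(sys.argv[1], sys.argv[1:])  # e.g. `ai make -j8`
```

Because execvp replaces the process image without forking, the migrated-to-A100 state carries over to the command and all its children, just like the asm version.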
No major projects in RISC-V, just little bits and pieces here and there, e.g. some contributions to the ISA manual, and a little bigger (but still minor) contributions to the V and B extensions. Published the first working "check it out and build" LLVM repo for RISC-V, based on merging some out-of-date patches from Alex. Preserved a gcc/binutils toolchain for RVV draft 0.7.1, whose tag was for some reason removed from the main riscv-gnu-toolchain repo (the commits still exist there, but there's no way to know what is a good point). I think a lot of people used that for C906/C910 until XTHeadVector got merged into GCC 14 with different mnemonics (a `th.` prefix on all the instructions). Some contributions to the Samsung port of DotNET to RISC-V.
Just, idk, as an independent person if I see something easy that the big players are ignoring then I try to fill the gap. Quite often that comes down to spending a few hours developing some example code to post on Reddit or my GitHub. But watch this space ... I'm thinking of maybe trying to do that full time with community sponsorship at a buck or five a month each (GitHub Sponsors, Patreon, Buy Me a Coffee, OF [1], etc.).
Would that work? I don't know.
[1] I hear a lot of people already have credit cards set up there.
I think you have some potential opportunities here, either in technical education or in podcasting/newsletters.
Could definitely imagine a weekly podcast where you cover the week's RISC-V developments and add some context from your experience and knowledge. Or a course targeted at getting new developers up to speed.
Either way the existing types of knowledge and work you do could work as marketing opportunities for those paid avenues.
> I have a couple of earlier RISC V systems that were advertised as nearly desktop performance
No one with any true knowledge of RISC-V would ever make such a claim. Know-nothing marketers might, I suppose, but why would you listen to them rather than to actual insiders?
The current newest RISC-V boards (Megrez and Titan and whatever the upcoming SpacemiT K3 ones are called) are solidly in mid-range Core 2 territory, especially the K3, which has SIMD/vectors that the other fast chips currently don't.
Older boards using the JH7110, TH1520, or K1 are closer to Pentium III or PowerPC G4, though with 4 or 8 cores instead of 1, but without an equivalent of the SSE or AltiVec SIMD of those old chips -- or, if they have it, with near-zero software using it.
Late this year we should see RISC-V products at Skylake to Zen 2 performance levels, verging on M1 (M1 IPC but lower MHz).
> they are much slower than similar priced arm systems
Irrelevant to the technology. They are competitive with similar µarch (five years older) Arm systems.
Price can never be competitive (assuming no deliberate loss-making) until production and sales volumes are similar. Which can't happen until performance matches current Arm and X86 performance -- which RISC-V is converging with quite quickly, certainly by 2030.
Atlantis should come in with similar performance per clock to an Apple M1, but probably at around 2.5 GHz instead of 3.2 GHz.
That's close enough to be unnoticeable for most people for most uses, at least on the CPU side. It'll come down more to how well things such as GPUs and video codecs are supported.
There are plenty of people using M1 or similar e.g. Zen 2 machines today with no inclination to upgrade. They are more than good enough.
It really doesn't matter much. The Titan and K3 are Core 2 performance, the K1 and JH7110 are more like Pentium III.
A 1.5 GHz Ascalon is still going to be ... I don't know ... Skylake level? More than enough for a usable modern desktop machine and a huge leap over even machines we'll start to have delivered 3 or 4 months from now.
Hopefully it will be affordable. As in Megrez or Titan prices, not Pioneer.
Single core performance is about what you say. But multi-core performance is much better. The K3 scores higher than a 2017 Macbook Air for multi-core on Geekbench 6.
And the K3 can take 32 GB of DDR5 and run a decent-sized LLM, which is not something you are doing on a 5-10 year old laptop. In addition to the vector instructions, the built-in video codec acceleration and hypervisor support make for quite a modern feature set.
The K3 is still too slow to be a desktop system for most people but there are some of us who would already be ok with it.
As for pricing, it is hard to find info. But it seems like around $200 may be possible for the Jupiter2.
RISC-V SBC single-core performance has been better than x86+QEMU since the VisionFive 2 (or HiFive Unmatched) but we didn't have enough cores unless you spent $2500 for a Pioneer.