This is cool, but there is no LICENSE file putting this in DONT USE territory.
This has a license: https://github.com/skiptools/skipstone but it vendors the other repo according to the readme? I am super confused about how this would work.
Huh, what does one have to do to comply with the LGPL on iOS anyways?
I'm sort of surprised that only the largest plan ($5,000/month), and not the $10/$500/$2,500/month plans, includes a license that doesn't involve figuring that nonsense out.
You aren't shipping the LGPL part of Skip with your app. It's a build tool.
You don't need to worry about using (L)GPL build tools to produce non-GPL apps.
You have nothing to worry about with this license unless you are forking the Skip build tool itself. You can't ship this build tool to the App Store anyway, it's a build tool and not code you run inside your app.
As I understand the LGPL - not a lawyer - you have to somehow enable all your users to relink your application against a different version of Skip (4.d.0, since 4.d.1 isn't possible on iOS). This means your application must do something like include a copy of all the files that went into linking it and convey those to users along with the app, plus scripts to rebuild it against a different version of Skip...
I can't imagine the app store would be particularly amused with this during app review... though I've never tried.
The license file linked provides an exception for 4d and 4e:
As a special exception to the GNU Lesser General Public License version 3 ("LGPL3"), the copyright holders of this Library give you permission to convey to a third party a Combined Work that links statically or dynamically to this Library without providing any Minimal Corresponding Source or Minimal Application Code as set out in 4d or providing the installation information set out in section 4e, provided that you comply with the other provisions of LGPL3 and provided that you meet, for the Application the terms and conditions of the license(s) which apply to the Application.
I just set up mine today, and I am not sure I recommend it.
I went from a 40" to a 52", and I'm just moving my head waaay too much and my shoulders hurt. It is curved, but very little imo, it's almost like it's flat. I'm going to try it for a week before making the call on whether to return it.
I feel like this needs a workflow where you do work in the middle and use the fringes for other applications that you rarely look at. Otherwise you're moving your head waaay too much and squinting a bunch.
Based on personal experience, I think the upper bound for comfortably useful size at normal sitting distances is probably about 32", and even then I think there'd be better returns on adding vertical pixels to a ~27" monitor. A modern equivalent to the old 16:10 30" 2560x1600 monitors (ideally 2x scaling 5120x3200) would be great for example, but one could also imagine a 4:3 or 5:4 monitor with the same width (~23.5") as current 16:9 27" monitors.
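(For reference on that last number: a 16:9 panel's width is diagonal x 16/sqrt(16^2 + 9^2), so a 27" 16:9 monitor is roughly 27 x 16/18.36 ≈ 23.5" wide.)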
I'm still rocking a couple of 30-inch Dell 2560x1600 monitors. They're about the perfect size, and not dealing with scaling in Linux is nice. I'd pay a ton of money for a modern equivalent.
Same! My employer offered a choice of 32-inch and 40-inch monitors. I “upgraded” from 32 to 40 but I regretted it. I just don’t make use of the extra horizontal space effectively.
That was my issue with multiple monitors years ago - I'd be craning my neck over too often (looking at logs, etc). I vastly prefer an ultrawide where I can put logs / monitors on the side flexibly.
I have a 34 inch now, and feel like I could use more space - but it's nice to know there's an upper bound. Do you feel like there's still room to go beyond 40, or is that the sweet spot?
3x27” high-PPI displays in portrait orientation is the winner and no one does it
The center display is always actually centered. The short edge of a high-PPI 27” screen is wide enough for actual normal width browser or IDE usage, but now you get much more vertical real estate on that window.
Not nearly as much neck movement as an ultra wide and since the entire array is pretty square, the neck movement is way more balanced.
I went from 34"(3440x1440) which felt a little bit small to 38"(3840x1600) and it is nearly perfect. I can have my main window in the middle and 2 or 4 smaller windows (logs, chat, youtube, etc) on the side.
The only thing I want now is double pixel density.
When I owned a 40" monitor, I had to get a deeper desk and sit pretty far from it. Even then, I couldn't game on it, because games shove the HUD and minimal into the corners, and they were too far to the side to keep an eye on.
Can't picture a 52" being usable as a PC monitor, really.
I sometimes think that my 40" is too much because the extra space just ends up hosting distracting junk like Slack.
I also have a mild take that large screens make screen real estate cheap, so less thought goes into user interface design. There's plenty of room, just stick the widget anywhere!
It'd be pretty interesting to compare how much the amount of information one can cram onto a ~27" screen has changed between 2005 and 2025, with the comparison points being a Mac running OS X 10.6 and a Mac running macOS 26. I think that's a particularly salient and apples-to-apples comparison, since Apple was selling 30" 2560x1600 displays back then, which are close cousins to modern 27" 2560x1440 displays.
My gut feeling is that the difference would be around 30-40%. The information density of the UI of OS X 10.6 and contemporary software was much higher than that of today's tabletized "bouncy castle" style UI.
It would be interesting but I don't think that information density necessarily makes a good interface.
As a personal pet peeve example, developers love to cram a search bar (or browser tabs) into the top of the window. It's denser, but it also makes the window harder to grab and drag.
True. More accurately, it's a combination of high density, judicious allocation of whitespace, and layouts that have been thought through. The 2000s versions of OS X were better in those regards too, though.
Seconding this. I have one for my work desk, where (surprisingly enough) it made a lot of sense. The DPI isn't as big of an issue as people make it out to be if your workflow doesn't depend on high density, but the curvature definitely could benefit from being a bit tighter. You need a fairly deep desk or a keyboard tray if you don't want to be turning your head a bunch.
That being said, having this in combination with PowerToys FancyZones has been fantastic. At any given time, I'm usually running 1-4 main working windows plus Signal, Outlook, and an RSS reader. This gives me more than enough real estate to keep them all available at a moment's notice. I have roughly 40% of the screen real estate dedicated to Signal, Outlook, and my RSS client, with the interior 60% being hotkey-mapped to divide in different proportions. Compared to my old setup (one ultrawide plus two verticals) it's been awesome.
I've been using a 49" monitor for almost four years.
I have the center window taking half of the screen, and on the sides I have my email, messaging clients and other things I like to monitor from time to time.
Kinda like this: [ | | ]
I am on a Mac and I use an app called Magnet to manage the windows. I will only change this setup for a larger monitor.
You'll get used to it. I have three 24-inch monitors side by side. The center one is usually the editor, the right one documentation or more editors, the left one browsers with info.
Maybe it's a head-turner vs eye-mover thing. It's a lot less fatiguing to move your eyes, which might not be an option for glasses wearers. I sit 2 feet away from my 50-inch OLED and moving my eyes is much less work than window management. Otherwise it is very workflow dependent, i.e. working on visuals or schematic diagrams.
Yeah, I'm on a Lenovo 5k2k 40" UW and it's never occurred to me to want something wider. Though I will admit I definitely noticed the loss of total real estate vs my old 3x 27" setup.
> You are a Gen Z App, You are Pandu,you are helping a user spark conversations with a new user, you are not cringe and you are not too forward, be human-like. You generate 1 short, trendy, and fun conversation starter. It should be under 100 characters and should not be unfinished. It should be tailored to the user's vibe or profile info. Keep it casual or playful and really gen z use slangs and emojis. No Quotation mark
There are u-boot patches with video and USB support. There's also a work-in-progress UEFI, which already boots Linux fine.
Once these are finalized, you won't need UART. You'll be selecting where to boot or configuring the firmware in the same manner you do on an IBM PC clone.
It will of course still be possible to use the simpler, reliable UART.
To add to that (edit no longer possible): Selecting boot from SD card via the boot DIP switches is already possible.
This ROM won't even initialize RAM. All it does is fetch the next stage from SPI, eMMC, SD or UART (xmodem) into SRAM (CPU cache used as RAM) and jump to it.
What's not possible is USB, as the (non-updatable) early boot code ROM within the SoC is trivial and obviously has no USB support.
Thus UEFI or u-boot are needed for USB or NVME boot.
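To illustrate, the entire ROM's job amounts to something like this sketch (not the actual vendor code; the helper names, sizes and the SRAM address are all invented):

    /* Sketch of the mask ROM's fetch-and-jump flow described above.
       Helpers are hypothetical stand-ins for what lives in silicon. */
    typedef void (*entry_fn)(void);

    enum boot_src { SRC_SPI, SRC_EMMC, SRC_SD, SRC_UART };

    extern enum boot_src read_boot_dip_switches(void);
    extern void copy_from_spi(void *dst, unsigned long max);
    extern void copy_from_emmc(void *dst, unsigned long max);
    extern void copy_from_sd(void *dst, unsigned long max);
    extern void xmodem_receive_uart(void *dst, unsigned long max);

    #define SRAM_BASE ((void *)0x08000000UL) /* cache-as-RAM; made-up address */
    #define STAGE_MAX (64UL * 1024UL)        /* whatever fits in SRAM */

    void rom_boot(void)
    {
        /* DRAM is never touched here; only on-chip SRAM is usable. */
        switch (read_boot_dip_switches()) {
        case SRC_SPI:  copy_from_spi(SRAM_BASE, STAGE_MAX);       break;
        case SRC_EMMC: copy_from_emmc(SRAM_BASE, STAGE_MAX);      break;
        case SRC_SD:   copy_from_sd(SRAM_BASE, STAGE_MAX);        break;
        case SRC_UART: xmodem_receive_uart(SRAM_BASE, STAGE_MAX); break;
        }

        /* Jump to the next stage (u-boot SPL, UEFI, ...), which is
           what brings up DRAM and anything fancy like USB or NVMe. */
        ((entry_fn)SRAM_BASE)();
    }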
Prometheus can also receive remote write requests, however, we recommend only writing metrics scraped by another Prometheus or the agent. The datamodel still has a few things that expect the metrics to have been scraped.
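For reference, the receiving side is opt-in. On recent versions you start the server with

    prometheus --web.enable-remote-write-receiver

and point the sending side's remote_write at its /api/v1/write endpoint.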
Anyone with experience scaling Prometheus horizontally? We are reaching the limits of our instance, memory- and CPU-wise, and I've yet to choose between scaling it myself with sharding or using Thanos/Victoria/Cortex.
If you want to query across the whole data set, use one of the other things.
Prometheus has a "federation" option but there's not been any active work on it for years.
It's basically the definition of Thanos - take a bunch of Prometheus servers and query across them. Plus long-term storage in S3.
VictoriaMetrics, Cortex and Mimir are centralised data stores that accept data from multiple Prometheus servers, but you could also run headless agents scraping and sending the data.
Note: if you are on a version before 2.44, try upgrading. Prometheus slimmed down a bit.
I've been through this song and dance. Did months-long PoCs (with live data, running next to the then-production Prometheus deployment) of Thanos, Cortex and VictoriaMetrics.
VM won hands down on pretty much all counts. It's easy and simple to operate and monitor, it scales really well and you can plan around how you want to partition and scale each component, and it's incredibly cheap to run, as performance is superior to the others even when backed by spinning HDDs vs the other solutions on SSDs.
It's especially easy to operate on Kubernetes using their CRDs and operators.
I am not associated with Victoria Metrics in any way, just a happy user and sysadmin who ran it for a few years.
VictoriaMetrics was recommended to me by a contractor and I've been very happy with it as well. It does have an option to push in metrics, which I intend to use with transient environments like CI jobs and the like, though I haven't gotten there yet.
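When I do, the plan is roughly the sketch below: a one-shot POST in Prometheus text format to what I understand is VM's import endpoint (the metric, job label and URL are all placeholders):

    /* Hypothetical one-shot metric push from a CI job to
       VictoriaMetrics via libcurl; names and URL are made up. */
    #include <curl/curl.h>

    int main(void)
    {
        const char *body =
            "ci_build_duration_seconds{job=\"nightly\"} 1234\n";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        curl_easy_setopt(curl, CURLOPT_URL,
            "http://victoria-metrics:8428/api/v1/import/prometheus");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

        CURLcode rc = curl_easy_perform(curl);  /* POST the sample */
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }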
Yep, we used to use that in a few places. CI jobs, batch processes, etc. Prometheus has PushGateway which we also used before migrating to VM, but it had certain drawbacks (can't recall exactly what, sorry) that the new solution didn't.
Operational nightmare, expensive to run, various parts of the entirely-too-many moving pieces it contains broke all the time and the performance was…unimpressive.
I've heard that some people manage to run this thing successfully, and power to them, but I want nothing more to do with it.
Just save yourself the pain and use Victoria Metrics. Added benefit: you get an implementation of a rate function that’s actually correct.
I have been running Mimir reasonably well. When it comes to performance, what exactly did you find unimpressive? I'd be interested to know about any pitfalls or pain points you've encountered so far.
Unfortunately, yes. The OTel Collector has plans to implement a WAL for the OTLP exporter and when it does, you should be resilient to upstream temporarily having issues.
What are the merits of the prometheus approach versus one where events/metrics and their original timestamps can be preserved, stored (during temporary outages), then forwarded to backends when connectivity is reestablished?