Hacker News | pzo's comments

I don't like Vercel's design; it's just a huge list of abstract skill names, and you have to click on every one to even have a clue what it does. Such a bad design IMHO.

The design of https://www.skillcreator.ai/explore is more useful to me. At least I can search by category, framework, and language, and I also see much more information about what a skill does at a glance. I don't know why Vercel wanted to do it completely in black and white - color, used with taste, gives useful context and information.


That site loads one skill at a time on the explore page on my iPhone in Mobile Safari.

slop?


Theranos was also hyping a lot and trying to build something. There is some threshold (to be decided where) beyond which something is more fraud than hype.

Also, these days the stock market doesn't have much relation to the real state of the economy - it's in many ways a casino.


Not sure who determines the threshold. He certainly goes to court more than your average person, but these are not startups; they are large companies under a lot of scrutiny. I don't think the comparison is valid.

>Not sure who determines the threshold

The SEC.

>he certainly goes to court more than your average person

Yes, because he sues a lot of entities for silly things, such as advertisers declining to buy ads that display next to pro-Hitler posts, or news outlets for posting unaltered screenshots of a social media site he acquired.


Right now this processing infrastructure is Mastercard/Visa, which have high fees, and Stripe has a high minimum fee. There are many local infrastructures in Asia (like QR-code payments) that don't have such big fees or are even free. The high minimum fee is mostly Visa/Mastercard/Stripe greed/incompetence plus regulation requirements/risk.

Finally, a UI that is not so ugly. Now I'm only wondering whether I can somehow set things up to share the same LLM models between LM Studio and llamabarn/Ollama (so that I don't waste storage on duplicated models).

Ollama made the wonderful choice of trying to replicate Docker registries/layers for the model weights, so of course the models you download with Ollama cannot be easily reused with other tooling.

Compare that with models downloaded with LM Studio, which are just directories plus the weights as published: you point llama.cpp/$tool-of-choice at them and it works.
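For example, with llama-cpp-python you can load an LM Studio download directly. This is a minimal sketch; the model path below is only a placeholder for wherever LM Studio put the GGUF file on your machine:

    import os
    from llama_cpp import Llama  # pip install llama-cpp-python

    # Placeholder path: point this at any GGUF file LM Studio downloaded.
    model_path = os.path.expanduser(
        "~/path/to/lm-studio/models/some-org/some-model/model-q4_k_m.gguf"
    )

    llm = Llama(model_path=model_path, n_ctx=4096)
    out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
    print(out["choices"][0]["text"])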


It was a good read until the end ...

> For the remainder of 2026, Microsoft is cooking up a big one: replacing more and more native apps with React Native. But don't let the name fool you, there's nothing "native" about it. These are projects designed to be easily ported across any machine and architecture, because underneath it all, it's just JavaScript. And each one spawns its own Chromium process, gobbling up your RAM so you can enjoy the privilege of opening the Settings app.

I'm a little tired of people dunking on React Native when they have no clue what they're talking about (and I'm not even a React Native dev, but an iOS dev). React Native doesn't spawn any Chromium process. It is not Electron. React Native doesn't even use the V8 engine. All UI views and widgets are native. The platform SDK is native, Yoga layout is native C++ and even faster than UIKit layout. The majority of RN code is native - go have a look at the languages section on GitHub: JS is only 19% of the codebase; everything else is C++, Obj-C, Obj-C++, Kotlin, and Java.

The problem with the laggy startup, AFAIK, was making HTTP requests to download those ads.


> React Native doesn't even use the V8 engine

Are you saying you would use React Native with a language other than JS?


Engines other than v8 exist. React Native uses Hermes or JavaScriptCore (Apple/Safari). [0]

Other engines include SpiderMonkey (Mozilla/Firefox) [1] and QuickJS [2]

[0]: https://reactnative.dev/docs/javascript-environment

[1]: https://spidermonkey.dev/

[2]: https://bellard.org/quickjs/


I'm a little tired of "hey I installed Linux!" posts. Ok, you installed Linux. Great. Wow! Now what, wanna show a screenshot of your desktop with an anime girl as the baackground and neofetch in a terminal window?

This comment seems fully detached from both the main linked article and the comment it replies to.

Did you read it? It's exactly what I posted, but 100x longer, and with memes.

I did read it, but it sounds like you didn't. It had quite a lot to say about the reason for the switch, the challenges involved, and alternative software to meet real needs. Eye candy was not the focus at all.

neofetch is outdated. fastfetch is the new neofetch now.

For me this latest macOS is buggy as well. AirDrop stopped working, and I cannot AirDrop from my phone to my MacBook that is 20 cm away, connected to the same fast WiFi, and sometimes even wired to my MacBook via USB-C.

I then tried to work around it and use Apple's Image Capture app to copy some files - the latest picture is never available in the app. So now I have to do it like a caveman: share stuff to my email and download the file from the email. And the share button sometimes fails as well...


Try opening the share menu on your phone, then opening the full airdrop menu, and airdropping to your laptop. Works for me when airdropping directly to my laptop in the share menu does not.

> It supports the conversion of PyTorch, TF/TF Lite, ONNX

I think it doesn't support TF Lite (only TF SavedModels), and ONNX hasn't been supported for quite a while now, sadly.

As for the repo, I like it - yesterday and today I actually had to convert a few models, and this would have been useful. I see you use Swift instead of coremltools, so that's great - benchmarking should have less overhead.
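For comparison, the usual coremltools route from Python looks roughly like this (a minimal sketch with a toy model; the layer sizes, input name, and file name are just illustrative):

    import torch
    import coremltools as ct  # pip install coremltools

    # Toy PyTorch model, traced so coremltools can consume it.
    model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU()).eval()
    example = torch.rand(1, 4)
    traced = torch.jit.trace(model, example)

    # Convert to an ML Program and save it as an .mlpackage.
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(name="x", shape=example.shape)],
        convert_to="mlprogram",
    )
    mlmodel.save("ToyModel.mlpackage")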

Some ideas:

1) Would love to have this as an agent skill too.

2) It would be good if we could parse the Xcode performance report file and print it in a human-readable format (to pass to AI) - Gemini Pro struggled for a while to figure out the JSON format.


The JSON output makes it easy to wrap as a tool for frameworks like LangGraph, but I would be worried about the latency. Since it is a CLI, you are likely reloading the whole model for every invocation. That overhead is significant compared to a persistent service where the model stays loaded in memory.
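A rough sketch of such a wrapper, using LangChain's tool decorator (usable as a tool node in LangGraph); the CLI name "coreml-bench" and its "--json" flag are placeholders for whatever the actual binary exposes. It also illustrates the latency concern: every call pays the full model-load cost.

    import json
    import subprocess
    from langchain_core.tools import tool

    @tool
    def benchmark_coreml_model(model_path: str) -> str:
        """Benchmark a Core ML model and return the JSON report as text."""
        # Placeholder CLI invocation; each call reloads the model from disk,
        # which is the per-invocation overhead mentioned above.
        result = subprocess.run(
            ["coreml-bench", model_path, "--json"],
            capture_output=True, text=True, check=True,
        )
        return json.dumps(json.loads(result.stdout), indent=2)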

It's so bad that these days such posts are flagged on HN and you cannot have any discussion about them. It feels like censorship, and HN isn't doing anything about it - not even being transparent by providing stats on how often something got flagged and how the flagging algorithm works, so we'd at least have some confidence that it isn't being abused by bots.


We probably need a European HN equivalent.


Hackeur News


Hacked News


I seem to remember having the ability to vouch for submissions like I can for comments. But it's never an option for these. Why?


You can vouch when a submission is marked "[flagged] [dead]"; this one is just "[flagged]".


Projects also have to pay off financially. We have been there before - startups used to move fast and break things, then once the MVP was validated they would slow down and fix things or even rewrite with new tech/architecture. Now you can validate an idea even faster with AI. And there is probably a lot of code that you write for one-time use, or throwaway internal tools, etc.


You don't have to build a new city from scratch: just do city planning, dedicate one outlying part of the city to high-density, high-rise residential buildings, and then let people decide where they want to live.

