virtualbluesky's comments

Acting in public is hyperlocal - your behaviour affects those around you and gives those affected right of reply, if they have the courage to take it.

Publishing your actions on the Internet is a little different. If people were affected by the action, they are affected (likely unknowingly) by the publication too - and the audience that you grant right of reply has at best an ideological horse in the race, not true skin in the game. And not much courage is required to engage with an opposing position.

So "living publicly" on the internet leaves a permanent door open to ideological conflict, mob behaviour, and creates a disconnect between action and reaction - in both time and space.

Kinda alien for a monkey brain to wrap banana powered neurons around.


Do you have suggestions for those less informed about projects that are pushing the envelope on desktop UX?


It's the heat map of the error surface of the equation... Fairly well understood as a concept in the land of optimization and gradient descent.

Interesting, what's being visualized there is actually a failure mode for an unidentifiable equation - the valley where the error is zero and therefore all solutions are acceptable. Introduce noise into the error measurements and the flatness of that valley causes odd behaviour.
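A toy sketch of what that valley looks like (the model and numbers here are illustrative, not from the linked visualization): if the data only constrains the product a*b, every (a, b) pair with the same product sits on the valley floor, and noise barely separates them.

```python
import numpy as np

# Hypothetical unidentifiable model: y = a * b * x.
# Only the product a*b is pinned down by the data, so every (a, b)
# with a*b == 2 lies at the bottom of a zero-error valley.
rng = np.random.default_rng(0)
x = np.linspace(1, 5, 20)
y = 2.0 * x  # true a*b = 2, noise-free

def error(a, b):
    return np.mean((a * b * x - y) ** 2)

# Two very different parameter pairs, identical (zero) error:
assert np.isclose(error(1.0, 2.0), 0.0)
assert np.isclose(error(4.0, 0.5), 0.0)

# With measurement noise the valley is no longer exactly flat, but it
# stays so shallow that the pairs remain nearly indistinguishable.
y_noisy = y + rng.normal(0, 0.05, x.shape)

def noisy_error(a, b):
    return np.mean((a * b * x - y_noisy) ** 2)

print(noisy_error(1.0, 2.0), noisy_error(4.0, 0.5))  # both tiny
```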


It is possible to have both a crispy base and liquid yolk.


I just find nothing palatable about the crispy edge. If it happens when I cook an egg I cut it off. It has no flavor beyond the oil it was cooked in and all the mouthfeel of an orange peel left in the sun for three weeks.


Ad hominem may require a human on the receiving end, no?


If a parrot squawks, "1,345 multiplied by 785 equals 1,055,825", you would be logically and factually incorrect to say, "Well, that's wrong, because how would a bird know?"

The historical meaning of the word 'hominem' isn't crucial to the universal logical principle of 'ad hominem'. If xenoorganisms beneath the ice-sheets of Titan are dismissing each other's ideas out of hand, they too may be committing this fallacy. The fallacy is the rejection of an argument based on its source rather than its content.


Another way to look at it is by analogy. You pick up a cup, the cup warms your hand uncomfortably, so you put it down.

You and the cup are objects, and physically send messages as you interact. That leads to changes in the physical world as each actor decides what to do with the incoming information, by physics or by conscious action.

So far so good. Except software is just information, and so the software version of that interaction includes the "person put hot cup down on table" event. That interests somebody, so they rapidly express their displeasure and rush to put a coaster underneath...

And that is a valid model of computing: direct messaging between interacting objects, a stream of events recording the changes produced, and actors that consume that stream and optionally choose to initiate a new interaction.
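The cup analogy above can be sketched as a few lines of Python (all names here are illustrative): objects exchange direct messages, each resulting change is appended to a shared event stream, and an actor consumes that stream and reacts.

```python
from dataclasses import dataclass

# The shared stream of "things that happened".
events = []

@dataclass
class Cup:
    temperature: float = 90.0

@dataclass
class Person:
    name: str
    def pick_up(self, cup):
        # Direct object-to-object interaction: the cup "messages" the hand.
        if cup.temperature > 60:
            events.append((self.name, "put hot cup down on table"))

class CoasterFan:
    """An actor that consumes the event stream and initiates a reaction."""
    def consume(self, stream):
        for who, what in stream:
            if "hot cup" in what:
                events.append(("coaster-fan", f"slid a coaster under {who}'s cup"))

alice = Person("alice")
alice.pick_up(Cup())              # the interaction produces an event
CoasterFan().consume(list(events))  # someone downstream reacts to it
print(events)
```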


Importing a common CSSStyleSheet (https://developer.mozilla.org/en-US/docs/Web/API/CSSStyleShe...) and adding it inside the web component constructor might help?


Problem Exists Between Keyboard And Chair


Is this motivated reasoning from the perspective of an OS vendor? It seems like intermediating the user's intent using AI has the same hazards as intermediating the internet through a single search provider... i.e. it'll happen, but will tend towards benefiting larger interests, leaving the experience a little less rich than before.


Of course it's motivated. Make something seem useful so you can sell it to people and/or give it to them for "free" and monetize their data.


There are quite a few takeaways that can be had without fully understanding the ah... esoterica.

1. Gradient descent is path-dependent and doesn't forget the initial conditions. Intuitively reasonable - the method can only make local decisions, and figures out 'correct' by looking at the size of its steps. There's no 'right answer' to discover, and each initial condition follows a subtly different path to 'slow enough'...

because...

2. With enough simplification the path taken by each optimization process can be modeled using a matrix (their covariance matrix, K) with defined properties. This acts as a curvature of the mathematical space, and has some side-effects like being able to use eigen-magic to justify why the optimization process locks some parameters in place quickly, but others take a long time to settle.

which is fine, but doesn't help explain why wild over-fitting doesn't plague high-dimensional models (would you even notice if it did?). Enter implicit regularization, stage left. And mostly passing me by on the way in, but:

3. Because they decided to use random noise to generate the functions they combined to solve their optimization problem, there is an additional layer of interpretation they put on the properties of the aforementioned matrix, implying the result will only use each constituent function 'as necessary' (i.e. regularized, rather than wildly amplifying pairs of coefficients).

And then something something Bayesian, which I'm happy to admit I'm not across.
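Points 1 and 2 can be seen in a toy version (this is my own simplification, not the paper's setup): run gradient descent on a quadratic loss 0.5 * w^T K w, where K stands in for the covariance-like matrix. Each eigen-direction of K shrinks at its own rate per step, so stiff directions lock in almost immediately while flat directions barely move and remember their initial condition.

```python
import numpy as np

# K with one stiff eigen-direction and one nearly flat one.
K = np.diag([10.0, 0.01])
lr = 0.05
w = np.array([1.0, 1.0])   # initial condition

for _ in range(100):
    w = w - lr * (K @ w)   # gradient of 0.5 * w^T K w is K @ w

# After 100 steps the stiff coordinate has collapsed to ~0, while the
# flat coordinate has only decayed by (1 - 0.0005)^100 ~ 0.95 - it still
# "remembers" where it started.
print(w)
```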


Thanks, that is a brilliant explainer!

