
> exponential functions remain (scaled) exponential when passed through such operations.

See also: eigenvalue, differential operator, diagonalisation, modal analysis
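A quick numerical sketch of the connection (my own illustration, with made-up values): exponentials are eigenfunctions of the differential operator, so differentiation just scales them.

```python
import numpy as np

# Sketch: exp(s*t) is an eigenfunction of d/dt with eigenvalue s,
# so differentiating it returns the same exponential, scaled by s.
s = 2.0
t = np.linspace(0.0, 1.0, 10001)
f = np.exp(s * t)

df = np.gradient(f, t)   # numerical approximation of d/dt
ratio = df / f           # should be ~s at every sample

print(ratio[5000])       # close to 2.0
```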


Pretty sure in the USA you can patent mathematics if it is an integral part of the realisation of a physical system.* There is a book, "Math You Can't Use," that discusses this.

* not a legal definition, IANAL.


> Pretty sure in the USA you can patent mathematics if it is an integral part of the realisation of a physical system.

Yes, that's true. In that example you're not patenting the mathematics itself, you're patenting a specific application, which is patentable. From my reading, mathematics per se is an abstract intellectual concept and thus not patentable (reference: https://ghbintellect.com/can-you-patent-a-formula/).

There is plenty of case law in modern times where the distinction between an abstract mathematical idea and an application of that idea was the issue under discussion.

An obligatory XKCD reference: https://xkcd.com/435/

And IANAL also.


I would think you could only patent a particular usage of it.


Moreover, The Unreasonable Effectiveness of Linear, Orthogonal Change of Basis.



To be more precise, when working with sampled data at a uniform sample rate you use the Discrete-Time Fourier Transform (DTFT), not the Fourier Transform. Nonetheless, you still end up with an approximate spectrum: the signal's spectrum convolved with the window function's spectrum.

In my view the Fourier Transform is still useful in the real world. For example you can use it to analytically derive the spectrum of a given window.
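A small numpy sketch of the convolution point (illustrative values, not from the thread): the DFT of a windowed tone is exactly the window's spectrum shifted to the tone's bin, i.e. frequency-domain convolution with a delta at the tone frequency.

```python
import numpy as np

# Sketch (made-up parameters): windowing in time multiplies the
# signal by w[n], which convolves its spectrum with W[k]. For a
# single complex tone this just shifts W to the tone's bin.
N = 256
n = np.arange(N)
k0 = 32                                  # bin of the test tone
w = np.hanning(N)                        # analysis window
x = w * np.exp(2j * np.pi * k0 * n / N)  # windowed complex tone

X = np.fft.fft(x)
W = np.fft.fft(w)

# Shifting the window spectrum by k0 bins reproduces X.
assert np.allclose(X, np.roll(W, k0))
```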

But I think the parent is hinting at a wavelet basis.


> Honestly, just pick up the Art of Electronics

I got this advice in 1998. I have the book. I found it useful for the "art" part. It got me through the projects that I was working on at the time, but personally it didn't help me with the fundamentals. Paraphrasing what has been said on this site many times in the past: AoE is a great first book in practical electronics if you already have an undergraduate degree in physics. I showed my brother AoE when he was building guitar pedals and he couldn't make sense of it and said it was obviously assuming things that he didn't know (he had no high-school science background).

There are a lot of potential and/or assumed prerequisites even for basic electronics: high-school physics, first-year calculus, maybe a differential equations course, certainly familiarity with complex numbers. As I understand it, EEs take vector calculus and classical electromagnetism; that's a long road for self-study. For that reason it's hard to give general advice about where to begin.

For someone starting out, I think the first things to study are DC and then AC analysis of passive circuits (networks of resistors, capacitors, and inductors), starting with networks of resistors: Ohm's Law, what current and voltage actually mean, and a basic introduction to the physics of passive components. These are the basics, and I don't see AoE getting anyone over this hump. This could be learnt in many ways -- electronics technicians and amateur radio people know this stuff, and there are no doubt courses outside university, both online and in person. If we're talking books, get a second-hand copy of Grob's "Basic Electronics." Once that's covered you can move on to semiconductors. I can recommend Malvino's "Electronic Principles," but this book won't teach you about resistors, capacitors, and inductors. After that, I think the Art of Electronics would be approachable, as would more specialised topics like digital design or operational amplifier circuits.
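To give a flavour of the very first kind of exercise this path involves (values made up by me): Ohm's Law applied to a two-resistor voltage divider.

```python
# First exercise of the kind Grob covers (made-up values): Ohm's Law
# and a two-resistor voltage divider across a 9 V supply.
V_in = 9.0               # supply voltage, volts
R1, R2 = 1000.0, 2000.0  # series resistors, ohms

I = V_in / (R1 + R2)     # Ohm's Law: I = V / R_total
V_R2 = I * R2            # voltage drop across R2
# Equivalently: V_R2 = V_in * R2 / (R1 + R2)

print(I, V_R2)           # 0.003 A and 6.0 V
```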

A book that usually gets a mention is Paul Scherz "Practical Electronics for Inventors." I got that book later, I personally found it a bit overwhelming with the mixture of really basic practical stuff combined with more advanced circuit theory, but it's no doubt popular for a reason.

Another standard recommendation is to buy one ARRL Handbook from each decade (I have 1988), the older ones have less advanced (hence more accessible) material. But reading the "Electronics Fundamentals" chapter is no substitute for Grob and Malvino.


Seconding an old ARRL handbook.


Alex Forencich has been live streaming rebuilding Corundum starting a few weeks ago: https://www.youtube.com/@AlexForencich/streams

As I understand it Taxi is where new development is happening: https://github.com/fpganinja/taxi


This is correct, and the result of those streams has been released as corundum-proto here: https://github.com/fpganinja/taxi/tree/master/src/cndm_proto . Note that this simplified design is intended for educational purposes only; the "production" variants will be much more capable (corundum-micro, corundum-lite, and corundum-ng).

In general, symbolic execution will consider all code paths. If it can't (or doesn't want to) prove that the condition is always true or false it will check that the code is correct in two cases: (1) true branch taken, (2) false branch taken.
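A toy sketch of that forking behaviour (my own illustration, nothing to do with Xr0's actual internals):

```python
# Toy sketch (not any real tool's internals): when the executor
# can't prove a branch condition either way, it forks, carrying the
# condition and its negation as path constraints, and checks the
# code under each constraint separately.
def symexec(cond, then_code, else_code):
    """Return the (path_constraint, code) pairs a checker would verify."""
    return [(cond, then_code), (f"!({cond})", else_code)]

for constraint, code in symexec("n > 0", "return n", "return -n"):
    print(constraint, "->", code)
```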


I understand how this works in general. I studied static analyzers at uni; I know lattice theory and all that. I'm just wondering how Xr0 handles it.


> we don't even use static analysis and validators for c or C++

There is some use, though I don't know how much; I'd guess it should be established best practice by now. Also, run test suites under Valgrind.

Historically many of the C/C++ static analyzers were proprietary. I haven't checked lately but I think Coverity was (is?) free for open source projects.


The part about task-initiation-induced stress -> fight-or-flight -> distraction/relief-seeking resonated with me. I hadn't noticed that before. The small-steps bit reminds me of BJ Fogg's "brush one tooth."

One common failure mode of "do the smallest/easiest thing first" that the article didn't address is that sometimes it's so easy to "buy the running shoes" that you end up with a house full of "easy first steps." I think a better approach is to aim to eliminate unnecessary complexity in moving towards the goal. You can do this by aiming for the smallest, easiest, simplest first step that simultaneously maximises progress towards the goal. e.g. "I want to make a stand to hold my XYZ." Bad first step: buy a 3D printer. Good first step: improvise something out of cardboard.


Ha--totally agree about the 'house full of easy first steps'. I have a few.

But I think it all still applies; the key is to keep taking small steps toward the thing, not just 'keep taking small steps'. You look at a successful small step and (like I wrote) ask 'what's the next step that will build on it?'


Love BJ Fogg's Tiny Habits!

2025 was the first time I have been able to implement and maintain a series of routines for the entire year (still going strong), and the concept of starting tiny was a key epiphany for me. I wrote about my experiences with it recently on my blog[0], but the point you make about good first steps is a great one.

A phrase I heard some time back that has stuck with me is "don't buy something hoping to be someone." In other words, don't buy running shoes hoping to become a runner.

In my personal experience, a good first step is the smallest version of doing the thing you ultimately want to be doing. "Brush one tooth" is a great example. Doing one push-up is another. For running, maybe just getting dressed, walking outside, and doing some stretching. The idea is that it's the stuff you would have to do anyways if you were going to do a more robust/thorough version of the thing you're trying to ultimately do. Buying shoes, on the other hand, is just purchasing more stuff.

[0] https://onebadbit.com/posts/2025/12/year-in-review/


And hunting for the perfect first step product becomes a dopamine chasing activity itself.


> I kinda don’t like PTP. Too complicated and requires specialized hardware.

In my view the specialised hardware is just a way to get more accurate transmission and arrival timestamps. That's useful whether or not you use PTP.

> My mental model is that you form a connected graph of clocks and this allows you to convert arbitrary timestamps from any clock to any clock. This is a lossy conversion that has jitter and can change with time.

This sounds like the "peer to peer" equivalent of PTP. It would require every node to maintain state about its estimate (skew, slew, variance) of every other clock. I like the concept, but obviously it adds complexity to end-stations beyond what PTP requires (i.e. it increases the hardware cost of embedded implementations). Such a system would also need to model the network topology, or control routing (as PTP does), because packets traversing different routes to the same host will experience different delay and jitter statistics.

> TicSync is cool

I hadn't seen this before, but I have implemented similar convex-hull based methods for clock recovery. I agree this is obviously a good approach. Thanks for sharing.
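For a flavour of the convex-hull idea (a simplified sketch in the spirit of TicSync, with made-up numbers -- not the actual algorithm): since queueing delay is nonnegative, receive timestamps satisfy t_recv >= a*t_send + b, so the minimum-delay packets trace the lower convex hull of the (send, receive) pairs, and a line fitted through that hull estimates skew a and offset b.

```python
import numpy as np

# Simplified convex-hull clock recovery sketch (invented values).
rng = np.random.default_rng(0)
a_true, b_true = 1.0 + 5e-5, 0.25            # made-up skew and offset

t_send = np.sort(rng.uniform(0.0, 100.0, 2000))
delay = rng.exponential(0.002, t_send.size)  # nonnegative queueing delay
t_recv = a_true * t_send + b_true + delay

def lower_hull(xs, ys):
    """Lower half of Andrew's monotone-chain convex hull (xs sorted)."""
    hull = []
    for p in zip(xs, ys):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the last point if it lies on or above the edge
            # from hull[-2] to the new point p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

hx, hy = zip(*lower_hull(t_send, t_recv))
a_est, b_est = np.polyfit(hx, hy, 1)  # line through the hull points
```

Note the offset estimate is biased up by the smallest delay actually observed, which is why hull-based methods naturally give one-sided bounds rather than exact values.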


> This sounds like the "peer to peer" equivalent of PTP. It would require every node to maintain state about its estimate (skew, slew, variance) of every other clock.

Well, it requires having the conversion function for each edge in the traversed path. And such function needs to exist only at the location(s) performing the conversion.
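A minimal sketch of the per-edge conversion idea (clock names and numbers are invented): each edge stores an affine conversion, and converting along a path just composes the edge functions.

```python
# Minimal sketch of the graph-of-clocks idea: each edge stores an
# affine conversion t_dst ~= a*t_src + b, and converting a timestamp
# along a path composes the edge functions. All values are made up.
edges = {
    ("A", "B"): (1.0 + 2e-6, 0.010),   # clock A -> clock B
    ("B", "C"): (1.0 - 1e-6, -0.003),  # clock B -> clock C
}

def convert(t, path):
    """Convert timestamp t along a path of clock names, edge by edge."""
    for src, dst in zip(path, path[1:]):
        a, b = edges[(src, dst)]
        t = a * t + b
    return t

print(convert(100.0, ["A", "B", "C"]))  # A-time 100.0 expressed in C-time
```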

> obviously it adds complexity to end-stations beyond what PTP requires

If you have PTP and it works then stick with it. If you’re trying to timesync a network of wearable devices then you don’t have PTP stamping hardware.

> because packets traversing different routes

Fair callout. It's probably a more useful model for less internetty use cases. Of which there are many!

For example when trying to timesync a collection of different sensors on different devices/microcontrollers.

Roboticists like CanBus and Ethercat. But even that is kinda overkill imho. TicSync can get you tens of microseconds of precision in user space.

