Hacker News | new | past | comments | ask | show | jobs | submit | ritter2a's comments

Very interesting! Quite amusing that adding milk seems to be an unquestionable truth while adding sugar is considered destroying the flavour and adding pepper (which is not uncommon in India) appears to be unthinkable.

But I find it most surprising that the detailed rules say nothing about how long to steep the tea.


What about adding "echo sleep 0.01 >> ~/.bashrc" to their .bashrc (or whichever shell config file is used)?


The real evil comes from putting it in ~/.bash_logout or equivalent so it doesn't get as much visibility as a bashrc might.


I don't get it. What's so bad/annoying about sleeping for 10 milliseconds whenever a new shell is opened? I don't think anyone would notice. "sleep 1000", on the other hand...

EDIT: I misunderstood it as executing

   echo sleep 0.01 >> ~/.bashrc
once instead of adding that line to .bashrc – even though that's exactly what you wrote...


It appends another sleep each time you start a new shell, so it subtly slows down over time.


wouldn't it do so exponentially?

The first time it does it once, the second time the statement is there twice so it does it twice, then 4, 8, 16, etc. ?


No, because:

    echo sleep 0.01 >> ~/.bashrc
adds this to .bashrc:

    sleep 0.01
It doesn't add itself, so the sleep time grows linearly by 0.01 s each time a shell is opened.
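A quick way to convince yourself that the growth is linear, simulated here in Python (a list stands in for the .bashrc, so no real shell config is touched):

```python
# Each "shell start" executes every line of the rc file. The echo line
# appends one more 'sleep 0.01', while existing sleep lines just run.
rc = ["echo sleep 0.01 >> ~/.bashrc"]  # initial .bashrc content
sleeps_per_start = []
for start in range(4):
    sleeps_per_start.append(rc.count("sleep 0.01"))  # sleeps paid this start
    rc.append("sleep 0.01")  # effect of executing the echo line
print(sleeps_per_start)  # [0, 1, 2, 3]: one extra sleep per start, not 2^n
```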


No, the entire statement isn't added each time, only the sleep portion.


the third time the statement is there three times ..


The idea is to add the 'echo ..' to the .bashrc, not just run the command once.


Ah, now I get it. Thanks! That's evil, indeed.


Wouldn't it add another ten ms every time you opened a new terminal session?


yeah, it's a Windows 95 simulator :)


At least for some forms of mechanical clocks, there is an app for that :)

E.g., for Android, the "Watch Accuracy Meter", which can be found in the Play Store or the APK source of your choice, uses the phone's microphone to measure the frequency of mechanical watches.
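The underlying signal-processing idea is simple enough to sketch (a toy illustration, not the app's actual implementation): a mechanical movement ticks at a fixed beat rate, and the dominant frequency of the recorded clicks reveals it. Here the "recording" is synthesized:

```python
import numpy as np

# Toy illustration: a movement running at 28,800 beats per hour ticks at
# 28800 / 3600 = 8 Hz. Synthesize a click train at that rate and recover
# the beat frequency from its spectrum.
rate = 8000                                      # audio sample rate (Hz)
t = np.arange(0, 4.0, 1 / rate)                  # 4 s "recording"
clicks = ((t * 8.0) % 1.0 < 0.01).astype(float)  # short click every 1/8 s

spectrum = np.abs(np.fft.rfft(clicks))
freqs = np.fft.rfftfreq(len(clicks), 1 / rate)
spectrum[0] = 0.0                                # ignore the DC component
print(freqs[np.argmax(spectrum)])                # 8.0 -> 28,800 bph
```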


The Atmos does not have a seconds hand, and all models except two lack minute indications.

I believe that is going to make measuring its accuracy difficult. You can approximate something if you compare it over a long period of time, perhaps at 12 o'clock, or as close to 12 o'clock as you can tell by inspecting the alignment of the hands.


I would say that the benefits you get from a habit of using SMT solvers depend a lot on the kind of problems you are working on.

If your problem is rather small and self-contained, you can really win a lot with these solvers. E.g., I used an ILP solver to fairly distribute tasks among a group of people based on a heuristic of familiarity between each person and the tasks. That's only a few constraints in the solver, and it saved me a lot of manual or suboptimal coding work. Similarly, it was easy to quickly check whether two graphs satisfy a custom condition (close to, but not quite, isomorphism) that I wanted to try out.
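For flavor, the task-distribution problem above is essentially the classic assignment problem; a brute-force version (with hypothetical familiarity scores, standing in for the actual ILP formulation) fits in a few lines:

```python
from itertools import permutations

# Hypothetical familiarity scores (higher = better fit): rows are people,
# columns are tasks. Assign each person exactly one task, maximizing the
# total score. Brute force is fine for illustration; an ILP solver
# handles the same model at larger sizes and with extra constraints.
familiarity = [
    [3, 1, 2],
    [2, 4, 1],
    [1, 2, 5],
]

def best_assignment(scores):
    n = len(scores)
    return max(
        permutations(range(n)),
        key=lambda perm: sum(scores[p][t] for p, t in enumerate(perm)),
    )

print(best_assignment(familiarity))  # (0, 1, 2): person i gets task perm[i]
```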

If you are thinking of bigger, more complex tasks, the efficiency gains you get might vary more. If your problem produces many large solver queries, the chances are good that you will wait a long time for results. Without much practice, adjusting/augmenting your specification (and specifically in the case of ILP solvers: tuning the solver parameters) to reduce your solving times is a very non-trivial task. Either way, the solver probably won't bring you to 80% of the execution speed of a custom solution: I once sped up a task by 100-200x by replacing a Gurobi LP with a dumb, theoretically exponential, but easy to optimize custom implementation (the solver was however helpful for testing that the implementation was correct).

Lastly, in my experience, once a constraint system/model reaches a certain complexity, it is a lot more difficult to debug than code in a programming language. There are useful techniques and tricks for debugging solver models, but the tooling and the sequential execution of real code make debugging traditional code less of a headache.


The big disadvantage of custom implementations is the fact that they are custom: if the constraints or requirements change, they suddenly stop working.

Tailored implementations are great for problems that you have to solve frequently with no changes to their structure. In reality, requirements change frequently, breaking many of the previously valid assumptions used for efficiently filtering the feasible space.


I tried to use this to ease the front-end workload for students in a compiler project (building a C compiler) for a university course, so that the project could focus on the more interesting middle- and back-end parts of the compiler. However, reported bugs in the C grammar that saw no activity at all [1] made this impossible. From this small sample of experiences, I was left with the impression that Tree-sitter is great for things like syntax highlighting, where wrong results are annoying but not dramatic, but not so suitable for tools that need a really correct syntax tree.

--- [1] https://github.com/tree-sitter/tree-sitter-c/issues/51


Hi there! You're right that the C grammar in particular is one that could use some love. C is not one of the languages that we're syntax highlighting with tree-sitter yet, nor is it one of the languages that we support Code Navigation for. That means that my team has had to prioritize their work in other places, and no community members have stepped up to take over or help out with maintenance of the C grammar. Not a satisfying answer, I realize, but an honest one.


> Right, but GPT-2 was the name of the particular ML architecture they were studying the properties of; not the name of any specific model trained on that architecture.

That sounds like it would have been a reasonable choice for naming their research, but isn't the abbreviation "GPT" short for "Generative Pre-trained Transformer"? Seems like they very specifically refer to the pre-trained model, which I would also take from the GPT-2 paper's abstract: "Our largest model, GPT-2, is a 1.5B parameter Transformer[...]" [1]

--- [1] https://cdn.openai.com/better-language-models/language_model...


Regarding interface stability: Indeed, the textual representation is not stable, things like added types in the representation of some instructions can happen when upgrading to a new version. However, to be entirely honest, in the last few years of updating LLVM-based research tools to newer LLVM versions, changes in the C++ API that required me to (sometimes just slightly) change my code happened a lot more often than changes in the textual representation...


I would claim that the benefits of 'mostly functions' strongly depend on the task at hand.

For the field of compilers, I can for example see value in making program analyses pure functions that just compute information about the program and separate them from the program transformations that use this information to (impurely) manipulate code. This makes the analyses more reusable and probably makes reasoning about correctness easier.
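As a minimal sketch of that separation (a hypothetical three-address mini-IR, not any particular compiler's API): the analysis is a pure function over the program, and the transformation consumes its result and mutates the code.

```python
def constant_analysis(instrs):
    """Pure analysis: map variable -> value for 'x = const' instructions."""
    consts = {}
    for dest, op, args in instrs:
        if op == "const":
            consts[dest] = args[0]
    return consts

def propagate_constants(instrs, consts):
    """Impure transformation: rewrite uses of known constants in place."""
    for i, (dest, op, args) in enumerate(instrs):
        if op == "add":
            instrs[i] = (dest, op, [consts.get(a, a) for a in args])

program = [("x", "const", [2]), ("y", "const", [3]), ("z", "add", ["x", "y"])]
info = constant_analysis(program)        # reusable, easy to test on its own
propagate_constants(program, info)       # all mutation happens here
print(program[2])  # ('z', 'add', [2, 3])
```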

For other tasks in the compiler, pure functions can be a pain. My favorite anecdote for this is that of a group of students in a compilers course who insisted on writing the project (a compiler for a subset of C) in Haskell and who, when discussing their implementation in the final code review, cited a recent paper [1] that describes how you can attach type information to an abstract syntax tree (which is an obvious no-brainer in the imperative world).
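For contrast, the imperative "no-brainer" amounts to little more than a mutable field on each node (a toy AST for illustration, assuming nothing about the students' actual code):

```python
# Toy AST: annotating nodes with types is just filling in a mutable field.
class IntLit:
    def __init__(self, value):
        self.value = value
        self.type = "int"

class BinOp:
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
        self.type = None  # filled in later by the type checker

def check(node):
    """Walk the tree, writing the computed type onto each node."""
    if isinstance(node, BinOp):
        check(node.left)
        check(node.right)
        assert node.left.type == node.right.type, "type mismatch"
        node.type = node.left.type
    return node.type

expr = BinOp("+", IntLit(1), IntLit(2))
print(check(expr))  # int
```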

---

[1] http://www.jucs.org/jucs_23_1/trees_that_grow/jucs_23_01_004...


An ad hoc solution is also a no-brainer in Haskell. They didn't need to read a paper to solve this issue, they did because they wanted the fanciest solution that is extensible in all dimensions.


AnyDSL (https://anydsl.github.io/) might be worth mentioning: it is a framework for creating domain-specific languages that uses partial evaluation to give powerful abstractions to the user while still producing high-performance code for different target platforms. The website lists existing DSLs for ray tracing and image stencil codes, as well as work on bioinformatics.
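The core idea, partial evaluation, can be shown with a toy Python analogue (AnyDSL does this at compile time with real code generation; a closure here merely conveys the flavor):

```python
# Toy analogue of partial evaluation: specialize a generic function for
# an argument known ahead of time. A real partial evaluator would unroll
# the loop and emit specialized machine code; here the closure only
# fixes the exponent n.
def make_power(n):
    def power(base):
        result = 1
        for _ in range(n):  # a partial evaluator would unroll this away
            result *= base
        return result
    return power

cube = make_power(3)  # "generated" specialized version, n fixed
print(cube(2))  # 8
```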


I consider it a fun little game to guess, for each mention of "Kafka" in an HN title, whether it refers to the author or the software. It's definitely not trivial: this time, I would have guessed "Kafka in Pieces" to be a component-by-component introduction to the software.


If there is something out there that is this introduction, I would love to read it!

