tejuis's comments | Hacker News

I've used Emacs for 30 years. In the very beginning I swapped Caps Lock and Control. After 20 years I started to have minor problems with my pinky, from a lot of keyboard use. I did not like evil mode for several reasons: it meant a lot of configuration and relearning commands. I tried out god-mode for a short while, but it had some sharp edges, and I went back to normal. Later I retried god-mode with more effort to make it work for me and never looked back. At that point it took a long time to get everything into muscle memory, though. Maybe two months to feel OK and years to feel good.

I use "i" to get into editing mode (normal Emacs mode) and Control-Backspace to get into god-mode. I mostly use searching to move around buffers. In god-mode that means pressing "s" and then typing the search text, with Enter marking the end of the search input. Then I use "." to search forwards and "," to search backwards. If I want to search for the word(s) at point, I press "s" and then Enter, and then "j" to grow the search term word by word. When ready, again "." searches forwards and "," backwards.

The most common operations, like changing buffers and Emacs windows, are a single key press away. Everything is quite effortless.

The best properties of god-mode for me are that you can use the same key combinations as usual, just producing them in a slightly different way, and that you can also do key-chord edits in editing mode if that is convenient. You don't have to exit editing mode at all, if that helps.

The obvious compromise with modal editing is that you have to know which mode you are in, and that some of the useful commands are not available in editing mode, so you have to switch modes to get to them. However, with god-mode's style of editing you always have all the keys available, and you only have to know which mode you are in (at least partly). I would never settle for evil mode, but god-mode works very well. A lot of benefits with a moderate number of downsides, so a win all in all.


BTW, that was a very interesting interview... I kind of agree. However, in order to automate something, you need to know the subject. For example, in digital circuit design we use these EDA tools, but on top of that we create a lot of custom tools to automate all kinds of phases of the process. You definitely have to know the field you're working in.


Same here, no problems with undefined behavior. No memory issues either, after finalizing the code with Valgrind.


“No memory issues in the tested state space”: that's the only thing Valgrind can say. It says nothing about how a run with different input would behave; it might still segfault, leak, use memory after free, or hit UB.


That is always the case on any platform. Just because something works on a Mac doesn't mean it will work on a PC, or vice versa. If a language has multiple compilers, you also need to test with different compilers to make sure your code works there too. You're trying to make this out to be a C-only issue, when it is a general issue, maybe under different names.


Data is information about things and code is information about how to transform data. Every program transforms input data to output data. Code is interpreted by the computer. Data and code are encoded to bits for the digital computer.


That sounds enlightening to me.

Code is interpreted by the computer; data is interpreted by code!

But when we say "code is interpreted by computer" we are really saying "Code is interpreted by Code" right? Meaning the interpreter or compiler.

So how come this snake doesn't eat its own tail?

The answer, the MAGIC of computers, is that at the lowest level, somehow, hardware is able to interpret code.


My answer tried to be as generic as possible and leave out the "real world" and implementation details in general. There are fundamental limitations to what you can express in four sentences...

Compilation is "just" an intermediate step between code and running it in an interpreter. The interpreter can be pure hardware, a mix of hardware and software, or you can do it on paper yourself (given time). These are implementation "details". Also, to me code is not a synonym for software, since I'm not referring to programmability, only to the transformation aspect. Programmability is an implementation detail. You may perform a transformation with hardware only, as mentioned above. The hardware itself could be described with code, e.g. in the Verilog language. Manufacturing a silicon chip based on Verilog code is also an implementation detail.

In fact your CPU/PC is hardware, and it performs a transformation from input data to output data. In this case, besides the other input data, like keyboard input and files on disk, you may consider the binary code part of the input data. This is where the confusion starts. It is an implementation detail and should not be confused with the general notion of utility we want. Code is "only" a means to an end. We want to do stuff with the machine: transform input data to output data, since that is essentially what we are after. For the goal of getting stuff done, data and code are separate things.

When saying "data is interpreted by code", it is partially correct. First of all, data carries meaning, as in meaning for people (the data user). Data is encoded in some format (which is also data, but implementation data), and you could say that this encoding is "interpreted". However, the encoding is "just" an implementation detail which depends on the machine you are trying to use for the transformation. When the computer paints pixels on your monitor, it transfers data bits through the display driver, through the HDMI protocol (for example), to the monitor itself. There are plenty of "interpreters" on that path. On the screen the pixels could represent the letter "A", and that could carry meaning for you (depending on the other stuff on the screen). This is why I stated that data is information about things. Encoded data is an implementation detail and varies between implementations, but the "true" data is essential and implementation-independent. It carries meaning and utility value. Essential data is not interpreted by code. The user (a person) interprets the essential data.

Code is data for the compiler. The compiler transforms source code (input data) into object code (output data). Code is data for the interpreter, too. An algorithm contains all the essential information about the data transformations to be performed, and code, in the form of programming-language text, is an implementation of that algorithm in a specific programming language. An algorithm is programming-language independent. It's fair to say that by code I mean the algorithm. But an algorithm must also be presented in some form or another, so it's code. :)


There are a number of things you can do to improve your situation. Obviously you should not have packages loaded that you don't really use. There are also ways to perform "lazy" loading, so that the memory image stays minimal until a package is really needed. I'm not using this myself.

My usage model is such that I only start one Emacs and use emacsclient to add files for editing from terminals. Emacs is running all the time (weeks/months).

Since Emacs is my primary interface to my Linux box, I give it some privileges. In my .xsession I do:

vmtouch -t $HOME/.emacs.d; vmtouch -ld /usr/bin/emacs-gtk

This effectively ensures that the critical Emacs images and data are present in memory all the time. Check out the vmtouch utility; it might be useful for other purposes as well.


My problem is really not the startup/loading time, but the regular stutters and long latency for input and commands.

I need my editor to "feel" really smooth and instant during editing. Emacs just never gives me that experience.

I think many long-time Emacs (and Jetbrains IDEs, for that matter) users just don't notice how laggy it is because they are so used to it, or are not very latency sensitive.


As a vim user who's transitioned to Emacs with evil, response latency just feels so much better in vim, and although I love and use the daemon-client tip, it misses that point.


I've been running Emacs for almost 30 years. Stability has never been a problem and the last 5-10 years it has been rock solid for me. I can run Emacs for months without restart. Hence instability must be an issue with some packages.


Welcome back...


Thanks for offering him some punctuation, he could probably also use some ,,,


There are some contradictions here. If you map your functionality from software (on a CPU) to an FPGA, your software becomes hardware. An FPGA is hardware. An FPGA is retargetable, but it is not programmable in the sense that software requires. The resources are preallocated, which is an essential property of hardware.

In software you allocate and deallocate memory, which is your dynamic resource. You don't have that in hardware. You can implement a memory manager in hardware, but your system will not be flexible enough to be "soft". You can also synthesize a CPU on an FPGA, but its performance is poor compared to a hard CPU macro on the FPGA.

Currently FPGAs are becoming popular for accelerating software functions to offload the CPU. The most powerful FPGAs have a lot of fixed-logic CPUs in them (i.e. hard macros).

You can't just say, "I'll go ahead and run my software functions as hardware." For certain types of workloads you always need software, in practice (if not in theory).


Does it matter which workloads always need software? If anything, the networking use-case shows that a workload, any workload, that is ubiquitous & can benefit tremendously from programmable hardware is what matters.

In a shared-memory system, just implementing libc (or the Java VM, or the Erlang VM) on an FPGA might be a win. It has to be enough bang for the buck for FPGA, but not so much that somebody would make fixed-hardware for it.

On that note, haven't networking endpoints had fixed hardware for ages now, too? Maybe it is inevitable that a successful application of FPGAs breeds interest in fixed-function hardware for it.


Bug report: the game mostly works, but sometimes it builds bogus rows at the bottom, which makes it harder to get a high score. Debian Linux with Chrome...


I can't tell if you're joking or not, but that's not a bug, that's just what happens in some versions of Tetris.


OK. Not a bug, a feature. :)


That came to mind immediately. :)

