feffe's comments | Hacker News

I'm a believer in restricting the scope of definitions as much as possible, and like programming languages that allow creating local bindings when defining another.

For example:

    local
        val backupIntervalHours = 24
        val approxBackupDurationHours = 2
        val wiggleRoomHours = 2
    in
    val ageAlertThresholdHours = backupIntervalHours + approxBackupDurationHours + wiggleRoomHours
    end
This makes it easier to document, in code, what components a constant is composed of, without introducing unnecessary bindings in the scope of the relevant variable. Sure, constants are just data, but the first question that pops into my head when seeing something in unfamiliar code is "What is the purpose of this?", and the smaller the scope, the faster it can be discarded.


Mentally discarding a name still takes some amount of effort, even if local.

I often write things the way you have done it, for the simple reason that, when writing the code, maybe I feel that I might have more than one use for the constant, and I'm used to thinking algebraically.

Except, that I might make them global, at the top of a module. Why? Because they encode assumptions that might be useful to know at a glance.

And I probably wouldn't go back and remove the constants once they were named.

But I also have no problem with unnamed but commented constants like the ones in the comment you responded to.


Steam on Linux works really well now. I sort of built my own Steam Machine a few months back with a Framework desktop that now sits in my TV rack. Gaming on it is a really good experience. Had to buy a PS5 controller though, because I could not get the Xbox controller to work over Bluetooth, which was a bit of a bummer. For me the new controller is most interesting because most games have Xbox controller support (with Xbox button captions) and the Steam controller adopts that button naming.


I just built one of these as well. For your Xbox controller, see if this works: find any Windows PC and download the Xbox Accessories app. Connect the controller (via USB) and update its firmware. Once I did this, I was able to pair it with the framework desktop via bluetooth (under linux) reliably, and it's been rock solid ever since. Apparently some of the models shipped with buggy firmware that linux really doesn't like for whatever reason.


If you still have the xbox controller, I'd recommend the dedicated USB wireless adapter. It's reasonably priced and very solid.


Especially if you ever use more than one controller at a time, a dedicated dongle is essential.


I tried several solutions, including an old PS3 controller and an Xbox One controller (with the official dongle), and I ended up buying an 8BitDo Xbox-style controller. It is well manufactured (better than the Xbox controller), has a built-in battery (unlike the Xbox controller), and comes with a USB dock for charging.

Highly recommend them.


CMake has become the de facto standard in many ways, but I don't think it's that easy to deal with. There's often some custom support code in a project (just as with makefiles) whose intricacies you need to learn, and also external third-party modules that solve particular integration issues with building software that you need to learn as well.

For me, base CMake is pretty easy by now, but I'd rather troubleshoot a makefile than some obscure third-party CMake module that doesn't do what I want. Plain old makefiles are very hackable, for better or worse [1]. It's easy to solve problems with make (in bespoke ways), and at the same time that's the big issue: it leads to lots of custom solutions of varying correctness.

[1]: Make is easy the same way C is easy.


I didn't say "easy to deal with", I said it's not bespoke nonsense, and that you could keep it mostly unchanged today, 8 years later.

Plus - the "obscure third party modules" have been getting less obscure and more standard-ish. In 2017 it was already not bad, today it's better.


I don't see the point of passing the size to a "free" function. I don't see how it could be used to speed up de-allocation. Additionally, most usages probably don't want to keep the size around.

But I concur that realloc is mostly pointless. For code that wants to grow or shrink a buffer, I think it's much better for it to track the data block size itself. There's very little chance that there happens to be free memory next to your allocation that can be "grown into"; at least for slab-like allocators, the growing room is minimal.

It's a bit difficult to unify all APIs because data will be needlessly passed around, when in most cases you don't care. Aligned allocation may also need a slightly different implementation anyway.

realloc and calloc are warts in my book...


I actually did at one time, and it was fantastic. Then Agile was rolled out in that organization and ruined everything. Oh, the irony.


It sounds like some top-down management style mislabeled as agile was introduced.


So about 99.99% of cases of "agile" from a corporate standpoint.


In my experience 0%; if this is important to you, working on your interview skills can help.


As a so-called "Individual Contributor", my interviewing skills have little impact on people two or more rungs above me in the hierarchy.

Because that's who pushes "bad agile" so much, who is targeted by all the SAFe propaganda, who tells you to shut up or be fired when you tell them their brilliant ideas about how SCRUM is to be done across the company result in unnecessary friction with no benefits. (I nearly quit there and then because of that talk)

And due to geographical constraints I can't exactly shop around as much as if I lived, let's say, on west coast of USA.

So, "working on my interview skills" doesn't change anything, except maybe grinding leetcode so I can taste more of the fecal rainbow of corporate agile.


I've left two companies that have reorganized around SAFe, and I'm currently at the third one now. It's a malignant egregore that I can't escape.


Hmm, I was a hang-around back in the day. Not one of the big boys. But I've got to say, some of the young kids I hung around with then had more skills than many I've worked with in the industry since. Going on my 25th year as a "professional" SW developer.


- Multiple output file dependencies are also difficult to describe in make (GNU Make). There's a dedicated section on how to solve it in the manual[^1], but it's a crutch.

- Taking environment variables into account as dependencies (they may affect Makefile logic and also code generation by tools that interpret them).

[^1]: https://www.gnu.org/software/automake/manual/html_node/Multi...


Multiple output file dependencies (aka "grouped targets") can be specified in GNU Make via `&:` since 4.3.
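For reference, a grouped-target rule looks something like this (a sketch with hypothetical yacc-style file names; recipe lines are indented with a tab):

```make
# With &: the recipe runs once and make knows it produces both files,
# unlike two separate rules which would run it twice (GNU Make >= 4.3).
parser.c parser.h &: parser.y
	yacc -d parser.y
	mv y.tab.c parser.c
	mv y.tab.h parser.h
```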


Perhaps this is similar to how they work in Go?

    var x []int
    fmt.Printf("%p %d", x, len(x)) // prints "0x0 0"

Indexing "x[0]" results in: "panic: runtime error: index out of range [0] with length 0"

They can also be appended to and then produce a valid slice.


I think most applications don't have any dependency on a specific page size. They use malloc (C) or new (C++) to allocate memory, which does not expose this constraint.

You need to care if you use mmap directly to map files or other resources into the virtual address space. The default page size can be queried using, for example, sysconf() on Linux. I guess things like garbage collectors in language runtimes would also use mmap directly, as they most likely want to sidestep malloc/new.

An application would normally not use madvise, unless also using mmap for some special purpose.

How flexible a CPU is with different page sizes depends on the architecture. For example, from what I recall, MIPS was extremely flexible and allowed any even power of two as the size of any TLB entry.

x86_64 only supports three different page sizes (4 kB, 2 MB and 1 GB), and there are limitations w.r.t. the number of TLB entries that can be used for the larger page sizes.

So, yea, there are bound to be regressions if you just try to switch to 2 MB as a default, but I think it should be doable. Not all archs use 4 kB to begin with.


Can the CPU cores in a CCD access the L3 cache of another CCD, albeit with higher latency? If so, the CCD without the extra cache may still get a performance boost.

I know there have been such designs in the past, but I don't know how it works in the Ryzen CPUs.


Speed of cache between CCDs has always been much worse than within one CCD.

At the same time, that latency is still peanuts compared to hitting main RAM.

The die with the cache probably has better latency (provided the cache doesn't connect through the IO die) but lower clocks, making it better for memory-limited workloads.

The other die will be better at non-memory-bound work, but should still be much better than normal at memory-bound tasks too. I suppose it remains to be seen whether lower latency and lower clocks beat higher latency and higher clocks, but I suspect 10% higher clocks won't compensate enough for cache hits being several times faster.

