5d749d7da7d5's comments

> can probably get an additional 5% to work with various levels of tweaking

My big complaint is the number of titles that receive a Platinum rating on ProtonDB, in spite of technical issues. I have purchased more than one title with a Platinum rating that had significant issues running on my machine.

This is compounded by the fact that Steam/ProtonDB user reports typically have an abysmal level of detail for replicating a working configuration. Someone got it to run flawlessly, but there is so much Wine configuration jargon that each time I need to dig into the issues, I waste time on something that would otherwise just work on Windows. I would love it if there were a 'share my Proton configuration' option per game that could be made available to the community.


I’ve experienced this, and it’s annoying. Worse, it’s never obvious whether it’s a driver problem.

That said, part of why I know anything about computers was trying to get games to work on my old 486. I sort of miss the feeling of reward of really wanting to play a game and needing to dig into the technical weeds to finally get the right boot disk. Proton almost scratches that itch for me; sadly, there aren’t enough games anymore to really motivate me.


I work in a research organization where I am responsible for crunching data and producing reports highlighting the most "notable" results. Not that the other data is uninteresting, but the volume is such that Excel cannot handle it, and even distributing it can be challenging for non-technical folks without dedicated solutions.

Instead, I can dump all of the processed results into a table, create some views highlighting analysis X vs Y, and share links that give others the ability to ask questions I had not even considered. Now the user is empowered to ask anything and they do not need to engage me for "simple questions". Everybody wins. I believe there is also an extension that allows you to generate and save new queries through the web interface.

It is not a tool for a professional analyst, but a means to collaborate with others. There are heavier/more feature rich alternatives, but Datasette is my favorite tool for getting results out the door without hassle (can run it off of a laptop after a pip install).
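The dump-and-serve step really is small. Here's a minimal sketch of the workflow described above (table, column, and view names are invented for illustration); once the file exists, `datasette results.db` serves both the table and the view over HTTP:

```python
# Dump processed results into SQLite and define a view highlighting
# "analysis X vs Y" -- collaborators can then query it via Datasette.
import sqlite3

conn = sqlite3.connect("results.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS results (sample TEXT, analysis TEXT, value REAL)"
)
conn.executemany(
    "INSERT INTO results VALUES (?, ?, ?)",
    [("s1", "X", 0.42), ("s1", "Y", 0.37), ("s2", "X", 0.91)],
)
# Views show up in Datasette's UI just like tables.
conn.execute("""
    CREATE VIEW IF NOT EXISTS x_vs_y AS
    SELECT sample,
           MAX(CASE WHEN analysis = 'X' THEN value END) AS x,
           MAX(CASE WHEN analysis = 'Y' THEN value END) AS y
    FROM results
    GROUP BY sample
""")
conn.commit()
```

Then `pip install datasette` and `datasette results.db`, and share the link.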


It's so great to hear people using it like this!

https://datasette.io/plugins/datasette-saved-queries is the plugin for storing queries - it's pretty basic, there's lots of scope for improving the story around that.


I have had such enthusiastic feedback from granting people access to the ~full dataset. They have been conditioned to expect whatever subset can fit inside an email or a powerpoint slide. I feel a little embarrassed when people fawn over the utility because it is so easy to get running.

Have not yet had a chance to try the idea, but I am toying with using render-images to bake in pre-built plots + markdown for reporting the output. Queryable report in a file. Dynamic Vega plotting (RShiny-ish) is also in the back of my mind, but that feels too close to magic.

It is an incredibly useful tool, and I appreciate the workflows you have enabled.


This looks super interesting, I have previously considered using the Bayesian Optimization[0] package for some work, but the ability to switch out the underlying algorithms is appealing.

Perhaps a bit of a far-out question - I would be interested in using this for optimizing real-world (ie slow, expensive, noisy) processes. A caveat is that the work is done in batches (eg N experiments at a time). Is there a mechanism by which I could feed in my results from previous rounds and have the algorithm suggest the next N configurations, sufficiently uncorrelated to explore promising space without bunching on top of each other? My immediate read is that I could use the package to pick the next optimal point, but would then have to lean on random search for the remainder of the batch.

0: https://github.com/fmfn/BayesianOptimization
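On the batch question: one common trick is the "constant liar" heuristic - pick the best point by expected improvement, pretend you already observed it at the current best value, refit, and repeat N times. The fake observations flatten the acquisition function near earlier picks, so the batch spreads out instead of bunching. This is a generic sketch on top of scikit-learn, not something either package necessarily provides; all names are invented here:

```python
# "Constant liar" batch suggestion sketch (illustrative): greedily pick
# points by expected improvement, faking each pick's outcome as the
# current best value before refitting the surrogate.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X, y_best):
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)          # avoid division by zero
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

def suggest_batch(X_obs, y_obs, candidates, n=4):
    X_obs, y_obs = [list(x) for x in X_obs], list(y_obs)
    batch = []
    for _ in range(n):
        gp = GaussianProcessRegressor(normalize_y=True)
        gp.fit(np.asarray(X_obs), np.asarray(y_obs))
        ei = expected_improvement(gp, candidates, max(y_obs))
        pick = candidates[int(np.argmax(ei))]
        batch.append(pick)
        X_obs.append(list(pick))             # the "lie": pretend we ran it...
        y_obs.append(max(y_obs))             # ...and it merely matched the best
    return batch
```

You then run the N experiments for real, replace the lies with the measured outcomes, and repeat for the next round.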


Yes, it is quite easy to switch algorithms via the "gpr" parameter; you just have to write a wrapper class. I am currently working on a repository that discusses how to do that in detail: https://github.com/SimonBlanke/surrogate-models - I will upload some examples (GPy and GPflow) within the next 24 hours.

I think those wrapper-classes will already answer some of your questions. If you have additional or unanswered questions you can open an issue in the Grad-Free-Opt repository.


Titan was the most premium card available and probably not representative of a typical build.


Sure, but it also remains the defining example of a "ludicrously expensive graphics card". The card below it, the GTX 780, was $650, which is still more expensive than any current generation console. Meanwhile the current top- and 2nd-tier GeForce cards (3090 and 3080) have MSRPs of $1500 and $700, which are more expensive, but not outrageously so. I just don't think it's accurate to paint the 2008-2017 market as affordable in contrast with the current expensive market.


In 2013 you could drop to the mid-tier GTX 770 for $399 and get more performance than either the Xbox One or PS4.

I don't think you are wrong, just more lenient in your definition of affordable. It is compounded by the supply constraints, and markups over retail we are currently seeing. I can buy a 3090 right now on amazon, but it is $2,450, 5x the console price. Titan was ~2x console price.


The 3090's MSRP is $1,499; unfortunately the only units for sale on Amazon are from scalpers.


MSRP doesn't matter anymore. These cards are so difficult to get that they basically go used for MSRP, and on the secondary market for nearly a multiple of MSRP.


As are the modern cards that cost $1000. You can spend significantly less than that and still play all the modern titles at high settings and 1080p60fps easily. (Or 4k if you're willing to make the same compromises on framerate that consoles do)


The minimum card you need to play current games on 1080p/60fps/high settings is about a GTX 1070, a four-year-old card which goes for $300 used. Remember we're comparing this to the cost of using a game streaming service.


Obviously, at least over the short term, the streaming service ends up cheaper (although with Stadia charging for games it depends on how many you buy and how much more they cost there than on Steam, especially with sales). I was just calling out the implication that the standard price for a GPU that runs modern titles well is more than a current-gen console; it's not.

(Amusingly I'm running the exact card you mentioned, and I did buy it for about $450 NZD used, although the going rate is more like $350 now on the local used market).


Should that be future-proofed and be per mm^3? I was under the impression that designs are becoming more three dimensional.


According to the Bekenstein bound, the maximum entropy in a given region of space is proportional to the area of a surface containing that space, not its volume so to really future-proof it, we should probably go with per mm^2...
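For reference, the area form of that statement (the Bekenstein–Hawking / holographic limit) is:

```latex
S_{\max} = \frac{k_B \, A}{4 \, l_P^{2}}, \qquad l_P^{2} = \frac{\hbar G}{c^{3}}
```

i.e. roughly one unit of entropy per Planck area of the enclosing surface, which is why the limit scales with area rather than volume.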


Maybe for memory, but I think it's too early for logic where heat transfer forces you to keep volume per surface area under control. Legitimate 3D logic will require a radically parallel compute model, probably something neural, and until we see what form that takes I think it's better to keep the metric 2D.


What will it take for there to be a sel4 router/firewall I can build/buy? Anything directly connected to the internet needs these kind of security guarantees.


The two big things are a network stack that can act as a router (seL4 has no network stack at all) and drivers. I'd imagine that a decent portion could be ripped from another open source operating system and run in userspace on seL4 for some benefit, but I'm not aware of any real effort to do that. Otherwise the usual approach is to use seL4 as a hypervisor and just run Linux under it, which gets you some benefits, but probably not the ones you'd want for a router to be as secure as possible.


This. Unfortunately, we haven't progressed much beyond Windows NT getting a TCSEC rating: first, remove networking. And then the floppy/cdrom. Then go down the list of other things you can't have.


Would a good first target here be SDN switches? In general, their own networking is mostly for remote access.


Simpler versions of such devices exist, eg HENSOLDT Cyber markets a secure VPN gateway that is built on seL4. Over the years we've been in on-and-off discussions with vendors of networking equipment, it's always a battle between those in the company who take security seriously and those resisting any rocking of the boat. Having said that, we're presently in such a discussion again and it's looking promising so far. However, it's entirely possible that someone has already done it and is not talking about it...


A possibility would be for Genode to actually go through their roadmap.

They had a "Use case: Genode-based network router" target for May, but it is still outstanding.


Is there a use case for a microkernel firewall? If you compromise just the tcp/ip stack, you can send and receive arbitrary packets, which is exactly what you would use a root exploit on a monolithic kernel to do.


>They had the 'natural gift' of picking up on the clues hinting that DNA, rather than protein, was central to the mechanism of biological inheritance.

That was Hershey and Chase [0]. Watson and Crick were the first to solve the structure of DNA.

[0] https://en.wikipedia.org/wiki/Hershey%E2%80%93Chase_experime...


Point taken, though that article also says "Hershey and Chase concluded that protein was not likely to be the hereditary genetic material. However, they did not make any conclusions regarding the specific function of DNA as hereditary material, and only said that it must have some undefined role."

Arguably, my general point still holds, though with an expanded cast of characters.


I recalled this story [0] where the Koch brothers said they would organize $1 billion to conservative groups for the 2016 election.

[0] https://www.cbsnews.com/news/koch-brothers-network-will-spen...


Important follow-up story the next year. https://slate.com/news-and-politics/2016/05/the-koch-brother... “The Koch Brothers Were Supposed to Buy the 2016 Election: What happened?”

As the article notes, Trump (who was not Koch-backed) won with very little spending compared to his big-money backed conventional GOP opponents. They decided to step back and spend less in 2016 as a result, since they didn’t support either Trump or Hillary.


>the Intel i5/i7/i9 were the processors of choice for gaming, but AMD's Ryzen line has been making remarkable inroads

I just checked my go-to hardware recommendation site, Logical Increments[0], and not a single Intel CPU makes the list. AMD everything.

[0] https://www.logicalincrements.com/


The new Zen3 series crushes the top of the line Intel desktop CPUs.

I'm on the 3900XT, one of the last from the Zen2 line - it's a ripper, yet I can also just drop in a 5900X or 5950X without having to change the motherboard.

Notwithstanding the many Intel security issues whose remedies nerfed performance, between Apple and AMD I can certainly see Intel in a bit of a struggle.


The fact that it's recommending a 3600 over a 10600k for a thousand-dollar build tells me there's a slant...


The 10600k is $100 more expensive and they're putting that money into the gpu instead. That makes sense to me.


The cheapest 3600 in stock is $80 cheaper than the cheapest 10600k from the same retailer.

And they've offered the 10600k with a $20 rebate on motherboards for months now...

$50 should come straight from the case, which is totally overshooting for a budget PC (I recommend the 300L here)

That leaves you with a 10 dollar difference for a CPU that competes with the i9 in gaming...

-

And by the way, even the next build up chooses a 3600X over the 10600K, despite the latter beating even the 3700X in gaming benchmarks.

The 10600K is insane value for gaming, and still fast enough for non-gaming tasks. Seeing as they list "heavy gaming" as a measurement on these builds, it doesn't make sense for the i5, at the very least, not to make an appearance.


Well, but you want to spend money on the GPU if you care about gaming.

Consider these two builds:

1. Ryzen 5 2600 ($150) + RTX 3070 ($500) for $650

2. i5-10600k ($280) + RX 5700XT ($330) for $610

The first build with the RTX 3070 is far better. The RTX 3070 buys you into 1440p high/ultra @ 90+ FPS, or 4k med/high @ 60+ FPS. Will the 2600 bottleneck that GPU? Maybe, but you still get better performance for the price by investing in the GPU versus the CPU.


Who said anything about cutting GPU performance?

Also your comparisons are quite poor... 2600 shouldn't even be in the running here, the 3070 would be held back from doing the thing it does best, high FPS 1440p (in actual games mind you, not just CS:GO)


I mean, you said I should waste my money on a CPU upgrade that would make no difference in 1440p gaming. That's money I can't spend on a better GPU, which would make a difference in 1440p gaming.


You realize you're mentioning a 2600 right? Which will place a tremendous limit on 1440p gaming?

My suggestion over the included build is spending $10 more on the build and getting a 10600K.

Silicon Lottery has found 100% of 10600Ks will do 4.7 GHz sustained all-core.

Even at that number it will easily outperform a 3600 in a meaningful way. Over 70% of them do 4.9 GHz, which is where it starts to reach i9 levels of gaming performance, by the way...

-

And the cherry on top over the 3600 is you can actually buy the i5 outside of Microcenter. Microcenter is the only place carrying the 3600 for $180... but they also have 10600k on perma-sale for $250.

Meanwhile outside of Microcenter the 3600 is rarer than hen's teeth while the 10600k is widely available at $270. I happen to have 5 microcenters within an hour or so of me, but most people don't have that luxury.


A 2600 + RTX 3070 will outperform a 10600k + 5700XT in 1440p gaming. You're way overestimating how much a 2600 would bottleneck the GPU.


Making a wrong statement confidently doesn't make it true.


Benchmarks:

Ryzen 5 2600 + RTX 3070[1]

    80 FPS - Ghost Recon: Breakpoint 1440p / Very High
    78 FPS - Horizon Zero Dawn 1440p / Ultimate
    72 FPS - Red Dead Redemption 2 1440p / High
    115 FPS - Death Stranding 1440p / Very High
    220 FPS - Doom Eternal 1440p / Ultra
    124 FPS - Resident Evil 3 1440p / Max
    75 FPS - Gears 5 1440p / Ultra

i7-10700k + RX 5700XT[2] (I couldn't find benchmarks for an i5-10600k with this GPU from the same source. The 10700k should be as good or better than the i5 though)

    67 FPS - Ghost Recon: Breakpoint 1440p / Very High
    73 FPS - Horizon Zero Dawn 1440p / Ultimate
    70 FPS - Red Dead Redemption 2 1440p / High
    111 FPS - Death Stranding 1440p / Very High
    185 FPS - Doom Eternal 1440p / Ultra
    104 FPS - Resident Evil 3 1440p / Max
    75 FPS - Gears 5 1440p / Ultra

1. https://www.youtube.com/watch?v=MkQuyRIbWpI

2. https://www.youtube.com/watch?v=hzSxytXsiTc


Those numbers are hilarious; there's a reason you found them on some random YouTube channel with no methodology or explanation...

I hopped on Gamers Nexus's 3080 review just to get a number for a 5700XT and an i7, and Horizon Zero Dawn comes in at 10 more FPS: https://www.gamersnexus.net/images/media/2020/rtx-3080-fe/hz...

You can spot check the other numbers and find similar discrepancies...

Of course the reason I can't find a proper reviewer doing this exact setup is because the idea of buying a 2600 for a new build doesn't make any sense period...

Even the i3-10100 beats it in gaming, and that costs as little as $100...


You're missing the 5000 series Ryzens, which they just updated this for.

The difference is key.


I'm not seeing any listing for 5000 series in the 1k price range, and the 10600k is actually in stock at or below MSRP...

Meanwhile it trades blows with the 5600X unless you're making a 7zip decompression and Cinebench rendering mule...

I'm guessing that's why they're not recommended? Being able to actually get the CPU is pretty important...


5600x is in the same price range and a no-brainer, once stock stabilizes.

There's no sense in intel right now when doing a new build.


> Once stock stabilizes

Giant asterisk during a global pandemic no?

And that's paying $50 more than what is apparently already too much, for almost-equivalent gaming performance (slightly better after overclocking, and the motherboard and cooler priced in above support that too)


> Git will be hard to beat

If I were the Anu people, I would focus on having a seamless compatibility layer that could manage Git <-> Anu repositories (there are undoubtedly many headaches that would occur synchronizing the two different models). This would allow developers to silently interact with ongoing git repos using the "better" tool. Getting wholesale migration to a new platform seems a significant challenge, but allowing developers to slowly build mind share with an improved workflow would be possible.

Disclaimer: I hate git.


`cargo install anu --version 1.0.0-alpha --features git` gives you a one-way incremental import, with the command `anu git`.


Compatibility with old projects is the main reason modern software sucks; look at zsh, or C++. Sometimes we need to shift to a brand new paradigm.


Git (and less popular competitors that are close in age and design principles, like Mercurial, Darcs and Fossil) is what everyone is using and therefore what needs compatibility, not an "old project" holding back progress.

That role is filled by Subversion, CVS, VSS etc. with their tragic anti-features.


For what it's worth, darcs can consume and produce git import/export streams. I've synced a darcs repo with git and mercurial mirrors with cron (to avoid slowing down commits by doing it in a hook).

