Hacker News: mateja's comments

Might not be the answer you were looking for but hear me out: the biggest impact on my Kubernetes knowledge has been starting a homelab on Talos Linux.

I've used this as a sandbox/playspace/proving ground for Kubernetes concepts to satisfy my own curiosities. The benefit of this space is that you can make mistakes without affecting any real data, and you can blow away your entire config and start from scratch if you need to. I have already seen benefits to this hobby in my career.

My entrypoint was the Talos getting started guide: https://www.talos.dev/

And following the community at https://www.reddit.com/r/selfhosted/


It sure helps, thank you for your answer!!!


Remove “auto” from the title and it’s still true


CMOS and DRAM are completely different processes and cannot be integrated on the same wafer. This has been tried many times unsuccessfully. This is why you see highly integrated packaging solutions like chiplets. These solutions integrate CMOS and DRAM in the same package, not on the same die.


Thanks, this is what I'd love to understand better.

At the highest level isn't it all EUV? Why can't the laser print both patterns onto the wafer?


A CMOS chip doesn't contain any capacitors. DRAM is made of capacitors, and thus has a radically different structure.


Chiplets are also used for that, sure (things like radios are occasionally a separate chiplet made in a different process), but I think the main push is smaller chips = higher yields.



It's trauma. See Gabor Mate for more about the trauma-addiction connection. https://thewisdomoftrauma.com/


This may suit your needs http://hn.premii.com/


Location: Charlottesville, Va

Remote: OK

Willing to relocate: Yes

Resume/CV: https://dl.dropboxusercontent.com/u/30149950/Mateja%20Putic%...

Email: mp3t@virginia.edu

Third-year Computer Engineering Ph.D. candidate, looking for a Summer 2015 internship in computer architecture or artificial intelligence applications. Specifically interested in accelerator architectures for AI applications or related problems; flexible. Previous experience interning with the Micron Automata Processor group.


The AP is essentially a silicon implementation of regular expressions, augmented with a relatively small number of Boolean gates and counters. It has many simple matching elements that can be connected together to create automata networks that advance in parallel. The output of a single automaton is a Boolean decision, indicating whether the automaton is in an accepting state or not. It is natively a MISD architecture, a rather dusty corner of Flynn's Taxonomy, because all active states receive the input symbol at the same time.
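The parallel state advancement described above can be sketched in a few lines of Python (illustrative only; this toy simulator bears no relation to the AP's actual interface): every currently active state consumes the same input symbol on each step, and the match decision is whether any accepting state remains active.

```python
# Toy NFA simulation: all active states consume the same input symbol
# at once, mirroring the MISD-style broadcast described above.
# (Illustrative sketch only -- not the Automata Processor's real API.)

def run_nfa(transitions, start, accepting, text):
    """transitions: dict mapping (state, symbol) -> set of next states."""
    active = {start}
    for symbol in text:
        # Every active state sees the same symbol simultaneously.
        nxt = set()
        for s in active:
            nxt |= transitions.get((s, symbol), set())
        active = nxt
        if not active:
            break
    return bool(active & accepting)

# NFA for the pattern "ab*c"
nfa = {
    (0, "a"): {1},
    (1, "b"): {1},
    (1, "c"): {2},
}
print(run_nfa(nfa, 0, {2}, "abbbc"))  # True
print(run_nfa(nfa, 0, {2}, "abca"))   # False
```

In hardware, the inner loop over active states is what happens in parallel across matching elements, which is where the throughput advantage over sequential emulation comes from.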

In contrast, FPGAs are much more general purpose and as a result have fewer individual processing elements. Indeed, an FPGA can be made to emulate the AP for small designs, but the AP is able to accommodate much larger (or more instances of) automata on one chip, resulting in much greater throughput or bandwidth, depending on the configuration.

It is fair to compare the AP to FPGAs in the context of problems that can be reduced to regex (augmented with digital logic and counters), but not in a general-purpose sense. Just because a problem reduces to regex and can be run on the AP doesn't mean it can be done so efficiently. But there are a host of problem domains in pattern matching that do map efficiently to the AP.


They run non-deterministic automata. Their class of grammars is bigger than that of regexps.

I think it is a good thing they have built.


I don't think this is true. All non-deterministic finite automata (NFAs) can be converted to deterministic finite automata (DFAs), and regular expressions are equivalent in power to DFAs.

Edit: Actually, I just read the paper that someone linked to. You're right that their grammar is larger than that of DFAs and regular expressions, but it appears that's because they extended it rather than because they're using nondeterminism.


The problem with DFAs is that they can explode exponentially in size for patterns with many alternatives containing ".*". You then lose locality of reference, etc. DFAs are also very sequential.

This is why NFAs are better: you can run several NFAs in parallel, each with its own states and inputs. It basically becomes a vectorized problem.
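Both points can be demonstrated with a small sketch (the `determinize` helper below is hypothetical, just plain subset construction): the language "the (n+1)-th symbol from the end is 'a'", i.e. (a|b)*a(a|b)^n, needs only n+2 NFA states, but the subset construction reaches 2^(n+1) DFA states.

```python
# Classic exponential DFA blowup demo via subset construction.
# (Hypothetical helper for illustration, not from any particular library.)
from collections import deque

def determinize(transitions, start, alphabet):
    """Subset construction; returns the number of reachable DFA states."""
    start_set = frozenset({start})
    seen = {start_set}
    queue = deque([start_set])
    while queue:
        cur = queue.popleft()
        for sym in alphabet:
            nxt = frozenset(t for s in cur
                            for t in transitions.get((s, sym), ()))
            if nxt and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

# NFA for (a|b)*a(a|b)^n -- "the (n+1)-th symbol from the end is 'a'"
n = 10
nfa = {(0, "a"): {0, 1}, (0, "b"): {0}}
for i in range(1, n + 1):
    nfa[(i, "a")] = {i + 1}
    nfa[(i, "b")] = {i + 1}

print(n + 2)                      # 12 NFA states
print(determinize(nfa, 0, "ab"))  # 2048 reachable DFA states = 2**(n+1)
```

Simulating the 12-state NFA directly (tracking the active-state set) avoids ever materializing those 2048 DFA states, which is the tradeoff being discussed.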


Yeah, so it becomes a space/time tradeoff. I think which is better depends on the problem you're solving. Many DFAs don't blow up exponentially in space, so you'd rather have the DFA in that case. You're right, though, that for DFAs with an exponential increase in space requirements, you're better off just running the NFA in parallel.


I like your idea about combining Excel and IPython a lot. So many people start with Excel because it makes it easy to enter data and put up a couple of plots, but by the time you want to do something more serious, you have multiple worksheets and cross-references everywhere that are difficult to replicate in MATLAB or IPython or a similar environment. I definitely see a market for that.

