Hacker News — newbye3's comments

I am 60 years old, and as a person always learning I have accumulated a huge stack of knowledge. I think my mind is in very good shape, so I can still learn new things quickly. A long time ago I found a good balance between life and work, and a well-paid job by European standards. The only problem is that sometimes I envy the $200K jobs, but there are two main walls to pursuing that salary. First: I have no work experience in the IT sector. Second: I have never used AWS or other cloud services. I am risk averse, and receiving a huge bill for cloud services while learning AWS is something I have avoided.

I can solve leetcode and hackerrank problems more or less easily, have a CS and math degree and a Ph.D. in math, can program in a long list of computer languages, and know a lot of stats, machine learning, algorithms, complexity, etc.

But at 60, if you have learned to focus on what is important, you will be able to provide an answer to many unknowns, and that is a precious skill. So to summarize: at 60 I am at my best, but you always want more (now it is more money).


I can't believe you have a PhD in math, can do leetcode easily, but AWS is too much for you.

I will personally show you around AWS if you're up for it. Although I only know EC2, arguably that's the only thing that matters :D


Why would they need to learn AWS? Their other skills, knowledge, and experience are too valuable to spend on AWS plumbing work. Work in areas one isn't skilled in can be delegated if one brings other valuable skills to the table.

Unless the desired goal is to be an AWS engineer but that’s a different story.


I don't know why they would need to learn AWS, ask them. They mentioned it, not me. I just offered to help.


I worked at AWS and have architected large-scale systems, and AWS is too much for me. The problem is that many of their solutions are super focused, and they have ecosystem lock-in (e.g. Lambda) that lets it all come together (at a price).


What is a better route for someone wanting to run a personal project but also learn the tooling that would be needed if the project "took off," in the sense of using a lot of bandwidth or having a lot of users/data? If that question is too broad... Is Lightsail a good starting place within AWS? Or is there another service that would allow for less lock-in but similar features?


Scale really does require some degree of dimensional analysis. You can go very far with just Amazon S3 and an EC2 host. The place to begin is really the fundamentals of a single host, then looking for where the bottlenecks will appear per project. That's when you can go through the catalog and find something that may be an OK upgrade, and then buy into the lock-in.


I don't think it's too much for him. He just finds it distasteful. I have that feeling about some technologies, like Oracle. It's not that they are super hard; I just don't want to touch them with a 10-foot pole.


I'm more curious to find out about this guy on the internet who offers to personally guide a random stranger around AWS. Tell me more about yourself, good sir.


If you live in the USA, I'm pretty sure you can pull $200K easily even if all you know is leetcoding well.


Have you looked at the free tiers offered by AWS or Azure? It's a great way to learn without spending anything.


https://news.ycombinator.com/item?id=26708148

The service is only free until you hit their limits. That post has 166 comments.


This sounds a lot like "an old guy's excuse".

If you are 60 years old, you learned how to monitor your spending in the late 1960s.

Same skill here.


When they were eight?


The best way to prevent that is to set an account billing alert that will SMS you if your bill goes over $X.

I doubt this feature will ever be implemented well: providing any kind of real-time spend is extremely hard, especially when 100+ service bills need to be aggregated, which usually happens in a batch process where these things get resolved.


My first impression is that a better algorithm is obtained with one vector for each hash function, but I am too lazy to do the math right now.


It would be a bit more precise, of course, but it would use three times the memory at any hash size...
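A rough back-of-the-envelope check, using the standard Bloom-filter false-positive approximation (the parameters `m`, `k`, `n` below are made up for illustration):

```python
from math import exp

# Hypothetical sizes: m bits per vector, k = 3 hash functions, n keys inserted.
m, k, n = 1024, 3, 100

# Shared layout: all k hashes set bits in one m-bit vector.
fpr_shared = (1 - exp(-k * n / m)) ** k

# One vector per hash function: k vectors of m bits each (k times the memory),
# so each vector absorbs only one set bit per inserted key.
fpr_separate = (1 - exp(-n / m)) ** k

print(fpr_separate < fpr_shared)  # → True: more precise, at 3x the memory
```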


Don't hire me, this version conses a lot )):

  (defun palindrome (str)
    (string= str (coerce (reverse (coerce str 'list)) 'string)))


I would ask you to implement reverse and coerce yourself.
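In that spirit, a sketch in Python of a palindrome check that does the reversal-free comparison by hand, without leaning on a built-in reverse:

```python
def palindrome(s):
    # Compare characters from both ends, walking inward,
    # without building a reversed copy of the string.
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True

print(palindrome("racecar"))  # → True
print(palindrome("hello"))    # → False
```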


Perhaps this is the kind of problem where a macro system, like the one in Common Lisp, lets you solve the problem by paying the price of a little more compilation time, but none at runtime.


At first sight, in the examples: inc1(7) is pure, inc2(8) is impure, but what is the meaning of inc2 (some kind of increment)? Then the definition of map takes an argument function f, but its body uses a new function g that comes from nowhere. So at first sight the examples are not insightful to me.


In Prolog the definition of path is something like:

  path(X,Y) :- edge(X,Y).
  path(X,Z) :- edge(X,Y), path(Y,Z).

So the definition of path in Datalog seems a little strange to me, from a Prolog user's point of view.
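For illustration, those two rules evaluated bottom-up to a fixed point (the way a Datalog engine would, rather than Prolog's top-down search) can be sketched in Python with a made-up edge relation:

```python
# Naive bottom-up evaluation of:
#   path(X,Y) :- edge(X,Y).
#   path(X,Z) :- edge(X,Y), path(Y,Z).
edge = {(1, 2), (2, 3), (3, 4)}

path = set(edge)           # first rule: every edge is a path
changed = True
while changed:             # apply the second rule until nothing new appears
    changed = False
    for (x, y) in edge:
        for (y2, z) in list(path):
            if y == y2 and (x, z) not in path:
                path.add((x, z))
                changed = True

print(sorted(path))  # → [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```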


If you are using integers to represent subsets, then int.bit_count gives the number of elements of the set. There is a machine instruction, POPCNT, that counts the number of ones in the binary representation.

Edited, more info: AMD's Barcelona architecture introduced the advanced bit manipulation (ABM) instructions, including POPCNT, as part of the SSE4a extensions in 2007. Intel introduced POPCNT with the SSE4.2 instruction set extension, first available in the Nehalem-based Core i7, released in November 2008.

https://www.rosettacode.org/wiki/Population_count

