My takeaway from this is that a lot of people disagree on what "edge" is. IMO, "edge" is the furthest resource over which you have some level of computational control. Could be a data center, could be a phone, could be an IoT device, could be a missile.
EDIT: I think I'm realizing people will disagree with me because I have a different perspective. For my use cases, my data comes from sensors on the edge, and so for me, I want my edge computing to be as close to those sensors as possible.
Not quite. Nearly every service out there has computational control over the end client (whether a mobile app, browser JS etc.), but very few are focused on edge compute at that level.
It is more helpful to think of it in terms of a traditional client-server architecture, where you want to move the server as close to the client as possible. This covers 95% of what people mean when they say edge compute.
I do see your point, and I agree that covers most cases, but can it really be a definition if it doesn't cover all cases? Are we just talking about a distinction between "edge" in an architecture and "edge computing"?
It's actually really helpful. Looking at a lot of js frameworks I thought I understood what they meant, but now I understand that the term is actually pretty ambiguous.
To me, edge is when you are doing most of the computation and data retrieval required for a given request in a data center close to each user. How to define "close"? Probably when you have at least several dozen POPs spread out across the world.
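To illustrate the "close" part: with a few dozen POPs, each request just goes to the nearest one. A toy sketch; the POP list and coordinates below are made up, and real edge networks do this with anycast/BGP rather than application code:

```python
from math import radians, sin, cos, asin, sqrt

# Toy illustration of "close": given a set of POPs, route each user to the
# nearest one. The POP list here is a made-up sample for illustration only.
POPS = {
    "iad": (38.95, -77.45),   # Virginia
    "fra": (50.03, 8.57),     # Frankfurt
    "sin": (1.36, 103.99),    # Singapore
    "gru": (-23.43, -46.47),  # São Paulo
}

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_pop(user_loc):
    return min(POPS, key=lambda p: haversine_km(user_loc, POPS[p]))

print(nearest_pop((48.85, 2.35)))  # a user in Paris -> "fra"
```

The more POPs you add, the smaller the worst-case distance to any user, which is why "several dozen" is roughly where the latency win becomes real.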
Officially, my employer has moved to three common "in office" work days per week (T-Th). Unofficially, that's a goal, and people still often work from home on those days. But it's nice to at least have a compromise. It isn't the one I'd prefer (1 or 2 days only), but it acknowledges that employees want to keep working from home, and similarly employees acknowledge that for our particular industry we do sometimes need in-office or in-lab time. It also allows employees to push back on any in-office meetings on non-common days, per official policy.
Can someone who understands ML better than I do tell me whether there is a point where an AI can indefinitely train on data generated by other AIs? If AI is trained on human development work product and then it eliminates human developers, will the capabilities of the AI be stuck indefinitely at the level of the software on which the models were trained? Not sure if I'm making sense, but the crux of my question is: can AIs effectively generate their own training data sets? If not, then I don't see how they could replace an industry.
> [Ben Weber] set about organizing a tournament for StarCraft AI agents to compete against each other, hoping to kick-start progress and raise interest.
> The announcement for the tournament was made in November of 2009, and the word soon went out on gaming websites and blogs: the 2010 Artificial Intelligence and Interactive Digital Entertainment (AIIDE) Conference, to be held in October 2010 at Stanford University, would host the first ever StarCraft AI competition.
[...]
> the only way to really test and improve the agent would be to play against skilled human players. Flush with pride that the agent could defeat the built-in AI, we played a game during the class against John Blitzer, a post-doc in Dan’s group who played ranked ladder matches on International Cyber Cup (iCCup).
> It was a disaster.
[...]
> Manually iterating through parameters and making adjustments would take far too long, however.
> Instead, we let the Overmind learn to fight on its own.
> In Norse mythology, Valhalla is a paradise where warriors’ souls engage in eternal battle. Using StarCraft’s map editor, we built Valhalla for the Overmind, where it could repeatedly and automatically run through different combat scenarios. By running repeated trials in Valhalla and varying the potential field strengths, the agent learned the best combination of parameters for each kind of engagement.
[...]
> Recruiting Oriol as our “coach” helped us apply the final touches. Oriol had played StarCraft at the pro level before retiring and turning to a life of science, and he joined the team as our coach, designated opponent, and in-house StarCraft expert.
> With a high-level human expert to test against and all of the algorithms in place, the agent progressed rapidly in the last few weeks, culminating in that first victory against Oriol mere days before the final submission.
> Like OpenAI, DeepMind trains its AI agents against versions of themselves and at an accelerated pace, so that the agents can clock hundreds of years of play time in the span of a few months. That has allowed this type of software to stand on equal footing with some of the most talented human players of Go and, now, much more sophisticated games like Starcraft [2] and Dota [2].
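To make that concrete, here is a toy sketch of the Valhalla-style loop described above: run many simulated engagements, vary a parameter, keep what wins. The scenario runner and win model below are made-up placeholders; the real system played actual StarCraft combat scenarios.

```python
import random

# Toy sketch of a "Valhalla"-style parameter search: the agent generates its
# own training signal by replaying simulated engagements and keeping the
# parameter values that win most often. simulate_engagement() is a made-up
# stand-in for the real StarCraft scenario runner.

def simulate_engagement(field_strength: float, scenario: int) -> bool:
    """Placeholder win model: True if the agent wins this scenario."""
    sweet_spot = 0.6 + 0.05 * (scenario % 3)  # toy: best strength varies by scenario
    return random.random() < 1.0 - abs(field_strength - sweet_spot)

def tune(num_scenarios: int, candidates: list[float]) -> float:
    """Grid-search the potential-field strength over repeated trials."""
    best, best_wins = candidates[0], -1
    for strength in candidates:
        wins = sum(simulate_engagement(strength, s) for s in range(num_scenarios))
        if wins > best_wins:
            best, best_wins = strength, wins
    return best

best = tune(num_scenarios=500, candidates=[i / 10 for i in range(1, 10)])
print(f"best potential-field strength found: {best:.1f}")
```

Self-play is the same idea one level up: instead of a fixed win model, the opponent is another copy of the agent, so the training data improves as the agent does rather than being capped at the level of a human-generated corpus.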
Note that they are lucky there to have a controlled environment with clear goals, meaning that experimentation is cheap; something that is not always the case in "real" life!
To that parent's point, <1% has gone to charities, which means the foundation is just spending money on the people who work in and own the foundation. I wouldn't need to get paid directly if I had a foundation that paid for everything I need or want.
If you have $1B you can already pay for everything you need or want. Donating it to your own non-profit, only to pay for your own expenses, is an extremely expensive way to "launder" that $1B into a tax deduction.
If I understand you right you're saying that a 501c3 is a net gain in money?
Would you mind walking me through that?
You start off with $1B net. Presumably these are capital gains. You donate it to your own 501c3 and get $125M back on your taxes. Now you take it back out as salary (taxed at what, 37%?) and end up with a total of $755M (including the $125M). So no, that was a losing bet.
Ok, so you also spend it on other things. But to come out ahead you still have to spend less than $125M total, not only on actual charity but also on overhead.
Basically, can you launder this money at a cost of less than 12.5%? Keep in mind that once laundered it's still not really yours. Sure, you control it, but it will always just be assets under your control.
You can't use that money to buy Twitter.
Also keep in mind that real money laundering can cost about 20-30%.
Is it even worth 12.5% if the billion comes with these huge restrictions?
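For what it's worth, here's the arithmetic above as a quick script (the 12.5% deduction value and 37% salary tax rate are this thread's assumptions, not real tax figures):

```python
# Back-of-the-envelope check of the numbers above. The 12.5% deduction
# value and the 37% income tax rate are the assumptions from this thread,
# not real tax advice.
donation = 1_000_000_000             # $1B donated to your own 501(c)(3)
deduction_value = 0.125 * donation   # ~$125M back on your taxes

# If you then pay the whole $1B back to yourself as salary:
salary_tax_rate = 0.37
after_tax_salary = donation * (1 - salary_tax_rate)  # $630M
total = after_tax_salary + deduction_value           # $755M

print(f"kept after round trip: ${total / 1e9:.3f}B of the original $1B")
# -> $755M instead of $1B: a losing trade, unless the foundation can
#    cover your spending for less than the $125M deduction was worth.
```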
Wife and I use Bitwarden. My master password is stored in her vault and hers in mine. I once toyed with the idea of building a "dead man's switch"-type service that requires clicking a confirmation button at a regular interval and, failing that, sends any info you want in the form of emails and/or a physical courier. I looked around, saw a few already available, and dropped it. But I haven't subscribed to such a service myself.
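The core of such a service is tiny, which is probably why several already exist. A minimal sketch, assuming a cron-driven check; the file path, 30-day interval, and notify step are all placeholder assumptions, and a real service would confirm via a signed link and deliver secrets securely:

```python
import time
from pathlib import Path

# Minimal dead man's switch sketch. Paths and the 30-day interval are
# placeholder assumptions for illustration.
CHECKIN_FILE = Path.home() / ".deadman_checkin"
INTERVAL = 30 * 24 * 3600  # require a check-in at least every 30 days

def check_in() -> None:
    """Called when the user clicks the confirmation button/link."""
    CHECKIN_FILE.write_text(str(time.time()))

def is_triggered() -> bool:
    """Run periodically from cron: has the user gone silent too long?"""
    if not CHECKIN_FILE.exists():
        return False  # never armed
    last = float(CHECKIN_FILE.read_text())
    return time.time() - last > INTERVAL

if is_triggered():
    # Here the real service would email the designated contacts or
    # dispatch the courier with the stored instructions.
    print("No check-in within interval: releasing emergency info")
```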
This feels oddly specific. I'm not sure why you can't engage on Twitter and still not be "lacking in real life." Who even gets to define "real life?" And while I'm sure some people spend 20-30 hours per week on Twitter I'm guessing it's such a small percentage of the world that it might as well be statistically insignificant.
> I'm guessing it's such a small percentage of the world that it might as well be statistically insignificant.
Exactly! The vast majority of content on social media is produced by a vanishingly small slice of the world's population. The views expressed should not be understood as representative.
At the very least it's a workable hypothesis. To be active on Twitter one needs the right personality, which means either getting very upset when other people reply/engage, or being indifferent and playing one of the games adults play. In the first case, the person is unable to avoid engaging. In the second, they engage because they have the usual "motives".
Twitter is a very dangerous social medium. I consider myself a worldly and experienced person, but I admit I tend to over-value what is shared on Twitter (momentarily; when I occasionally look back at my bookmarks I have little or no memory of those tweets, or I cannot understand why I bookmarked them).
I over-value (rather than properly value) because either I have no clue who the person tweeting is (case 1: why should I listen to them? Who are they? Would the same observation "hold" in a face-to-face conversation?), or I know them/they are public figures (case 2) playing a game of popularity or relevance in which I, as part of the audience, am the sucker.
To give an example, the other day someone wrote that "the US should ramp up oil production now". I read it and told myself "Ok". A reply-guy answered "what are you talking about, this is not like software, where you can 'easily' scale up the number of servers". And I thought, man, I was really not thinking. My first reaction when reading a tweet should be "this is BS; who is this person, where is the competence coming from, what is the game they are playing now", but instead my first reaction was passive acceptance.
Dangerous game.
Not to mention, you can't even express this thought on twitter. Way too many characters.
I think the character limit creates a blunt form of communication that leads to this toxic environment. It is practically designed to create misunderstandings and dismissive short responses to those misunderstandings.
> This feels oddly specific. I'm not sure why you can't engage on Twitter and still not be "lacking in real life."
It all depends on your definition of "engaging on Twitter". People who read their curated follow lists and occasionally post a thing or two are one thing, and that's definitely doable without "lacking in real life". But I struggle to imagine how one can spend 20-30 hours a week engaging in wild debates on Twitter and not "lack in real life".
I've noticed similar tendencies in myself recently, but with Discord instead of Twitter. After doing some prolonged soul-searching, I found that to be one of the main reasons.
I think "the problem with LTT" is that, as time goes on, they've slid off the purely informational stuff and into the whatever-gets-the-most-clicks stuff. I don't mind a little bit of humor or personality (Digital Foundry is great in that regard), but when Linus started uploading videos that defended his use of click-baity thumbnails and the bribes he received from Nvidia/Intel, his credibility fell off a cliff for me. If you're not going to stand for the objective truth, why even bother reviewing hardware? I'd imagine that pressure is what pushed them to invest in this lab, but even then I have a hard time trusting them.
Linus is welcome to chase whatever niche market he wants, but as a "purely informational source" he's got a pretty marred track record these days.
Why do clickbaity thumbnails matter more than the content of the video? Linus has made it clear that he hates using them, but videos with them consistently get way better viewership, which is kinda essential to keep the channel running.
I'm also very curious to see a source on the "bribes he received from Nvidia/Intel", because I'm not finding anything that looks relevant on Google.
> Why do clickbaity thumbnails matter more than the content of the video?
Take, for example, this recent video: "We ACTUALLY downloaded more RAM" [1], complete with grinning YouTube face holding a stick of RAM marked '10TB', and it's complete bullshit.
How can I trust the opinion of someone who publishes such embarrassing nonsense?
I like how you quoted my question and then completely ignored it. The fact that you disliked the title of a video is not in any way a meaningful criticism of its content.
But okay, let's take a closer look at that video. When I saw it in my YouTube recs, I rolled my eyes at the clickbait and skipped over it, but I didn't see how it makes LTT look bad. In fact, I just gave it a fair shot and skimmed through it for myself, and it actually looks like a pretty solid explanation of memory hierarchy and swap space for beginners, packaged in a format that will increase its reach. I don't see what's bullshit about that.
Look, say what you will about clickbait, the unfortunate truth is that it gets views, which channels like LTT need to survive and grow. Linus is on the record saying he hates it, but they've done the tests to confirm that the stupid thumbnails and titles just perform better.
And come on, let's be honest here: How many people are going to click on a video titled "What is swap space?" or "You can use Google Drive for swap space on Linux" or something similarly boring? Even the best explanation in the world isn't going to get traction with a title like that. I looked for comparable videos and it looks like "What is Linux swap?" by Average Linux User (https://youtu.be/0mgefj9ibRE) is the next most-viewed video on the same topic. That video has gotten about 90,000 views since it was posted in 2019. By comparison, the LTT video has averaged about 100,000 views per day in the 16 days since it was posted.
So it looks to me like LTT took a technical topic that most people would never think about, found an angle to make it interesting to random people browsing YouTube, and tricked potentially thousands of people into learning something. What exactly is embarrassing about that?
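Or, putting those numbers side by side (figures as cited above):

```python
# Quick comparison of the view numbers cited above.
alu_total = 90_000                # "What is Linux swap?" total views since 2019
ltt_daily = 100_000               # LTT video's average views per day
ltt_days = 16
ltt_total = ltt_daily * ltt_days  # ~1.6M views in just over two weeks

print(f"LTT so far: {ltt_total:,} views vs {alu_total:,} in ~3 years")
# -> roughly 18x the audience in a tiny fraction of the time
```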
Not sure about the other stuff you claim; I'm not a big video guy for tech things (just let me skip and search ahead easily), but this came up with my friends a few years ago after people started noticing many videos from various creators moving to this thumbnail format.