Not particularly directed at you, but it's certainly a double standard that managers are the only ones responsible for "rethinking another's employment". I'm curious how often you would consider reflecting on / rethinking your own employment?
I say this because I was recently hired to work on a project where managers are obviously the only ones responsible for deciding how a developer should function within the organization. It is obvious the state of the project is anything but healthy at the moment. Attrition rates are astronomical and the code is beyond ugly in a lot of cases.
"Rethinking employment" was a clever way of saying "firing someone who cannot take direction".
If you're suggesting a manager sometimes needs to self-reflect, sure, but a good manager does eventually have to make the call that someone isn't up to snuff.
There are several people in my department who probably should be "rethought" or let go, but management is too soft to make the call. It's actually harmful to several of them, who could easily move on to other roles.
If I (male) had a chance to be on permanent, comfortable welfare, I would probably spend more time playing games, doing sports, socializing, traveling, etc., instead of sitting in a noisy, badly lit office among desperate people I often dislike, whipped by my boss in a constant rat race, and being refused bonuses for great performance because a new car/boat/house has suddenly caught my superior's interest.
My guess is the doctors who don't spend extra time trying to solve the case probably shouldn't be paid for the time separately as is being suggested here.
It would appear to me that they have already lost the innate curiosity (if they ever had it) that made them want to become doctors in the first place.
Or they're not confident enough in their ability. Doctors are already paid substantially more than most other professions, so for them not to put in extra time on the exceptional cases makes me feel they're probably entitled. Too entitled to have any impact on the rare cases.
The doctors who could realistically find the solution are the ones who will likely want to take the case on for free.
The idea that we should count on people to spend additional time on their job for free because they enjoy it seems overly idealistic. Sure, such people exist, but the easiest way to incentivize more time on a job is to just pay people accordingly.
A great example of why I think the entire core ecosystem of the web is backwards.
I've had a number of comments here on HN related to this. Imagine this same scenario, except instead of the User being in charge of deciding which engine is used to render a website / webapp, the Developer is making that choice. What would a developer need to do if they were in control of which "engine" rendered their website on the user's computer?
Instead of being at the mercy of Google / Chrome, the developer of said site could simply set an HTTP response header like "X-BrowserEngine", and the client's computer would know how to (a) download the new engine if it's not on the computer already, (b) sandbox the new engine, and (c) run the site / app in said engine.
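A minimal sketch of what that client-side dispatch could look like. Everything here is hypothetical: the "X-BrowserEngine" header name, the engine registry, and the fallback behavior are illustrative, not a real protocol.

```python
# Hypothetical sketch: a "meta browser" picking a rendering engine based
# on a server-supplied response header. Engine names are illustrative.

INSTALLED_ENGINES = {"blink": "render with Blink", "gecko": "render with Gecko"}

def choose_engine(response_headers, default="blink"):
    """Return the engine name the site requests, falling back to a default."""
    requested = response_headers.get("X-BrowserEngine", default).lower()
    if requested in INSTALLED_ENGINES:
        return requested
    # Engine not installed: in the real concept the user would be prompted
    # to download and sandbox it; here we just fall back to the default.
    return default

print(choose_engine({"X-BrowserEngine": "Gecko"}))  # gecko
print(choose_engine({}))                            # blink
```

A real implementation would of course need signed engine packages and a sandboxing story, but the dispatch itself is this simple.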
I've called this idea the "Meta Browser" in the past. It's a concept for an app that sandboxes and runs sites on different browser engines seamlessly. The user experience is more or less as though they're continuing to use a single app to browse the web, but behind the scenes could be any number of custom engines rendering the content.
What if anyone in the world who had an idea for a "new web", could build it tonight, and have it used tomorrow?
My solution to the "new web" problem is radically different (though to be fair, trying to reinvent the web is in itself a radical idea).
I've posted my idea to many similar threads, so I apologize for repeating myself, but I feel pretty passionately about it and I feel it's a good idea to try to spread. Until I start receiving convincing evidence / arguments that the idea isn't worth spreading I'll probably continue.
Imagine if we didn't have to decide what the "new web" was going to be, BUT we did allow that experimentation to take place? I say we shouldn't make it a requirement to "convince people it's the right thing to do before it gets built and people start using it".
What if users didn't use "browsers"? Instead they used "meta browsers": an application which hosts browser engines. Not only could apps / documents / etc. be downloaded through this "meta browser", but switching between browser engines would also be seamless to the end user. If they didn't already have a given browser engine, they would be prompted to download it if a particular app / document developer decided to support it.
In this "new web", the "document / app" developer decides which browser engine the "meta browser" should render their app with.
It probably feels like they're on a suicide mission (for their browser), but why should it be a "requirement"? If the ideas are bigger and better and eventually catch on (ie. things we haven't even thought of yet) why should "the web" somehow dictate what core set of ideas are the right ones?
The WWW uses DNS. Think about it: DNS evolved as an upgraded, distributed hosts.txt table. Fundamentally, an entry in that name ledger is a pointer to a machine you would log into. Who gets to change that ledger? It's static.
Now we have blockchain tech where names can point dynamically to content or cluster of machines. Ethereum ENS is an early version of this.
Imagine you wanted a global map of places on the Internet itself. Like OpenStreetMap, but so secure you can rely on it, so that self-driving cars can use it. Technically it's not impossible anymore. The Web can't do that, because of single authorship of data (far less secure than multi-party authorship).
Ironically, Mike Hearn suggested such a system with TradeNet [1], but apparently has missed what is happening with the evolution of blockchain tech.
The next web will be a transaction system, not a communication system - the former is a generalization of the latter. If you're interested in building these kinds of systems - we are a startup building the foundations and are hiring.
My day job is working on Corda, which is a distributed ledger platform. So I haven't missed it.
When an app identity is based on a public key, you can start to do things like load them from a BitTorrent style network (if you want to). Or define a traditional CDN as the primary entry point but have slower/more decentralised systems as backups. App identity doesn't change so locally stored data and sessions are not lost.
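To make the idea concrete, here is a toy sketch of content-addressed app identity. This is my simplification, not Corda's actual mechanism: a hash fingerprint stands in for the public-key identity, and any bundle fetched from any source (CDN, BitTorrent, mirror) is accepted only if it matches what the identity promised.

```python
import hashlib

def app_id(public_key_bytes: bytes) -> str:
    """Stable app identity: a fingerprint of the developer's public key.
    The identity doesn't change when the hosting location changes."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

def fetch_verified(bundle: bytes, expected_hash: str) -> bytes:
    """Accept a bundle from ANY source, as long as its hash matches
    what the (hypothetical) signed manifest promised."""
    if hashlib.sha256(bundle).hexdigest() != expected_hash:
        raise ValueError("bundle does not match manifest; reject it")
    return bundle

manifest_hash = hashlib.sha256(b"app-v1").hexdigest()
print(app_id(b"developer-public-key"))
print(fetch_verified(b"app-v1", manifest_hash))  # b'app-v1'
```

Because identity is derived from the key rather than the URL, locally stored data and sessions keyed on `app_id` survive a change of hosting.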
Can you explain how your meta-browser should differ from an ordinary operating system? I.e. additional functionality it would have when not compared to a browser, but to the substrate browsers currently run on.
In a nutshell: it would only be a simple shell, with an extremely minimal API (for user settings, etc.), which could sandbox browser engines. This (a) makes "switching between browsers" a seamless experience for the end user, and (b) makes the development experience much friendlier for the developer [since they're developing for their browser engine of choice].
Dear god why. What is the point of using the web if I'm going to target specific browser engines only? I may as well write native apps and target specific OSs only.
If your idea were implemented, I'd expect that in 30 years someone will propose the meta-meta-browser so developers can choose the meta-browser that hosts their chosen browser engine.
>I've posted my idea to many similar threads, so I apologize for repeating myself, but I feel pretty passionately about it and I feel it's a good idea to try to spread. Until I start receiving convincing evidence / arguments that the idea isn't worth spreading I'll probably continue.
I didn't read a single word about the idea, but I already like you, because this is the force that drives change - not yet another framework built on rotten primitives.
My personal view on the topic is that we are in a trap now. Nobody is able to build NewBrowser in a sane amount of time (or a lifetime, at least), so the web is the only alternative. But there is already a technique that's almost always overlooked despite being designed for exactly this sort of thing: virtualization. While wasm looks promising, it is just another waste of engineering in the shadow of hosted full VMs. Virtual machines can already do anything the web has to provide: isolation, hardware abstraction, time sharing, quotas, etc. Millions of VPS hosts function on the same principles our browsers are trying to implement.
I think that newweb must be built as a virtual machine, with the small difference that it could actually access the host system to interact with user data or other "sites" or "apps". Not directly, but via networking to 169.254.254.254 or something like that for streaming, and shared memory (think DMA) for intensive tasks like video streaming and accelerated graphics.
For those who are unsure about that approach, I can tell you that I'm running a Linux OS in a VM hosted by Windows 7, and it talks to the host via built-in Samba and regular sockets. Moreover, if I need a slightly different setup, I simply clone my Linux so that it shares all the data (no copy) but holds distinct changes. The same way, your new frontend could just cheaply clone an existing VM and apply a few fixes over it to actually implement the UI.
Of course, Linux is a bad example, having a startup time comparable to modern website loading. It should be really fast and lightweight; I think something like DOS+OpenGL+sound driver should do the trick.
But how do you deploy VMs to the tightly controlled environment of mobile devices?
The good thing about web development today is that you can deploy the client side of any application to any smartphone, since all of them have been built with strong web browser support - and the browser is a fairly complete platform, with major browsers being available and relatively compatible across all mobile and desktop systems. I don't know of any VM that shares those qualities.
As ARM VPSes are available, proper virtualization does work on both x86 and ARM. It means that VT can be implemented on mobile (not sure about the somewhat custom iPhone chips, though). The fact that it isn't implemented yet is not a blocker. If all three mobile OSes said "here is your industrial-grade isolated VM, do anything you want and mind the time/battery/memory restrictions", the problem would be solved. It should not even be as complex as a real VirtualBox, because we don't need to emulate an entire PC, only the UI-related parts of it.
>how do you deploy VM
Clone the OS-provided VM, put the binary into its address space from cache and/or network, and run it. It connects to the host system and other VMs via sockets/shm and does its job, sharing real hardware in a host-defined, predictable way. Virtual private servers do this every day.
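A tiny sketch of the guest-to-host channel described above, using a plain loopback socket as a stand-in for the 169.254.x.x host address (a real system would use the hypervisor's virtio channels or shared memory; the "clipboard read" request is made up):

```python
import socket
import threading

def host_service(server_sock):
    """Host side: answer one request from a guest 'app'."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"host ack: " + request)

# Host listens on loopback; port 0 lets the OS pick a free port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
t = threading.Thread(target=host_service, args=(server,))
t.start()

# Guest side: the VM-hosted "app" talks to the host over a socket.
guest = socket.create_connection(server.getsockname())
guest.sendall(b"clipboard read")
reply = guest.recv(1024)
print(reply)  # b'host ack: clipboard read'
guest.close()
t.join()
server.close()
```

The point is only that the channel is ordinary networking, which every VM stack already supports, so no browser-specific plumbing is needed.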
> If all three mobile OSes said "there is your industrial-grade isolated vm, do anything you want and note the time/battery/memory restrictions" then problem would be solved
Yes, well, being theoretically possible is not the same as being viable and having solved the chicken-and-egg problem. I think WebAssembly nowadays has a better chance of being adopted as the standard for a common platform.
I think it's a useful thought experiment. Some reasons why it seems like a bad idea:
a) Bugs. Browsers are already pretty much the most complex software out there, and they have tons of bugs. A meta-browser would be orders of magnitude more complex, and so likely have orders of magnitude more bugs.
b) Security. Special case of a) above. Browsers have had decades to gradually iron out security issues, and we still see periodic flare-ups. A meta-browser would start from scratch and be taking on a much bigger problem. You would need some way to allow people to add new engines without causing them to insert malicious code, interfere with other websites' browser engines, steal people's information, etc. It's hard enough protecting people when all malicious websites have is html and javascript. It's much worse when they have the ability to write Assembly. See https://en.wikipedia.org/wiki/ActiveX for an earlier attempt at this.
c) Adoption. Writing a browser engine is a lot of work, which is why there hasn't been a new one in a decade. If every web app had to provide a browser engine nobody would build web apps for your meta browser. Everyone would end up just using one of your default engines, at which point you're back to the current state of the world but with all the complexity of points a) and b) above.
It's worth backing up and asking yourself: what is the problem you're trying to solve? Then we can talk about whether it's really a problem and what the solution might look like, without immediately barreling down the first solution that comes to mind.
That idea is suggested in my article, in the paragraph where I suggest forking Chromium to add a new tab type.
Effectively you'd have web and newweb tabs side by side. You'd get some of the Chromium infrastructure 'for free', like user switching, the nice tab dragging code and so on. NewWeb tabs would not contain the URL bar, back button, reload button, bookmark star, extension buttons etc. But it might reduce some of the mental overhead of having to switch between 'browser' apps.
Thank you for the post, and finding and reading my comment. It's extremely thought provoking, and inspiring to know that these conversations are happening.
The "new tab type" idea sounds like it fits. In a way I see the "browser renaissance", that I think (hope) is going to happen within the next decade, is also more than just about sandboxing browser engines. When you follow the line of thought further I think the browser core becomes supported by a set of decoupled libraries which will be reused by different browser engines.
I think the toughest hurdle for this kind of thing is abstracting away the details while still making it possible for end users to make educated, granular decisions, so that they can understand more or less what the security implications of certain actions / settings would be. I imagine those two things (user knowledge, and the need to abstract / shield users from themselves) will eventually converge to a happy middle ground. But for starters, for the least knowledgeable users, it could probably be something like providing a handful of options like "extra safe", "safe", "maybe trouble", "danger zone".
Though to be fair "danger zone" would probably mean something different than it historically would, since the "shell app / meta-browser" hosting the browser engines in theory would prevent an application from escaping its sandbox, but instead could allow an app, within the confines of the user's settings, to do things the user didn't expect.
"...unless you work at Google or Microsoft you can’t meaningfully impact the technical direction of the web"
I think this is a great argument for why we need a (for lack of a better name) "meta-browser". An application on the user's machine that contains and runs browsers. Then flip the control to the developer. If I'm only going to design for [name of obscure but super secure browser], my success doesn't have to be dictated by the fact that 99.99% of users didn't originally open my browser of choice. If they come across a page only supported by this little-known browser, they are prompted that they can install it, or they can decide to move on to the next website if the developers didn't write any fallback.
This doesn't just ensure the web can remain open, but makes the whole architecture (the web itself) an open question and allows all aspects of "the web" to evolve more smoothly.
I had an idea that I was going to turn into a side project related to this, but find myself too busy to work on it at the moment.
Imagine a simple ML engine that would allow an artist to pipe in a selection of "art", whether that is visual or aural. The engine would then build a network on top of that data. The next step would be for the ML engine to build pieces "inspired" by the artist's selections which would, in theory, inspire the artist.
I've seen a handful of projects posted that are similar to this, but have yet to take the step of allowing the user to select which pieces of music / art the engine should use for inspiration.
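As a toy stand-in for that ML engine, a character-level Markov chain already shows the shape of the idea: the artist's selected corpus is the only thing driving what the generator produces. (A real version would use a proper generative model; the corpus here is obviously a placeholder.)

```python
import random
from collections import defaultdict

def train(corpus, order=2):
    """Build a transition table from the artist's selected material."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def inspire(model, seed, length=60):
    """Generate a new piece 'inspired' by the training selection."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-2:])
        if not choices:
            break
        out += random.choice(choices)
    return out

# The artist picks the inspiration; the engine never sees anything else.
selection = "the quick brown fox jumps over the lazy dog and the quick cat"
model = train(selection)
result = inspire(model, "th")
print(result)
```

Swapping `selection` for a different set of pieces changes the output's character entirely, which is exactly the "artist curates the training set" step the existing projects skip.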
Of all the professions that exist today, I think artists have a special distinction in that they are probably some of the only people who will be immune to the possibility of ML "taking their jobs".
Generally speaking, given that DynamoDB is a NoSQL database service, I'm not certain that moving larger clients to their own dedicated AWS resources would cause too many negative side effects - especially for clients so large they're causing scalability issues.
Yes, if that manipulation is driven by you, and if "wealth" satisfies your subjective preferences. So it could be money and assets, or accomplishments, or relationships, and so on.