
> Next, we keep the DERP network costs under control… by trying to never use it. When using Tailscale, almost all of your traffic goes peer to peer, so DERP is only used as a backup. We continue to improve our core product so it can build point-to-point links in ever-more-obscure situations.

This is mentioned in passing, but it shows a very good technique: they incentivize technical excellence by tying it to a concrete cost. The free plan, with DERP, is the sacred cow that must never be removed. If they don't fix the "ever-more-obscure situations", the cost goes up. If they pay an engineer to investigate and fix them, not only does the engineer get to do interesting technical work, and not only does the system become more reliable and "good", but the work can also be framed as increasing the profit margin of the product (by not increasing costs).

I worked with Avery on Google Fiber, and we did the same thing. Our sacred cow was excellent US-based phone support, which is quite expensive. If there were bugs in our product, users would call in, and our call center costs would increase because we'd need more people working. So every week in our team meeting, we would look at summaries of calls and take on engineering work to address the most common classes of problems. That let us scale up the business and still provide friendly and competent phone support, because we were reducing the problems that people called in about. (This was things like having our Wi-Fi access points steer 5GHz-capable devices away from flakier 2.4GHz signals, or fixing "black screen" bugs where the TV randomly stopped playing for software or network reasons.) Because we had that "sacred cow", every obscure bug we spent months fixing not only made the product better and was intellectually stimulating to finally figure out, but also had a concrete impact on how costly it was to deliver the service.

What most companies would do here to reduce costs is simple. Don't fix DERP bugs, just charge for DERP. Don't fix "black screen" bugs, just hide the phone number on your website so people can't figure out how to call.

Avery has found the perfect balance between cost reduction, interesting engineering, and the somewhat nebulous "good product". Normally conflicting concerns, all living together in harmony. If everyone copied his technique here, the world would be a better place.



I would add that software infrastructure can run incredibly fast and scale incredibly well on modern hardware if you're a bit careful about resource usage.

Traditional relational DBs like PostgreSQL are very scalable on modern hardware. If you take the time to craft a normalized schema with low redundancy, being careful to keep data small, you can achieve performance, resource efficiency, and cache efficiency hundreds of times better than bloated distributed document-based NoSQL systems. You can also get better transactional integrity, reduce reliance on queues (use load balancers instead), and get more immediate and more atomic error propagation and edge-case resolution. You can really build something lean and mean, to a point that would be physically impossible in distributed systems or systems that involve multiple hops over potentially congested networks (as Admiral Grace Hopper liked to remind us, distance matters in computing: https://www.youtube.com/watch?v=9eyFDBPk4Yw). As far as I know, normalized relational databases are still the best at efficiently using multiple levels of physically near caches to their full potential. Most applications don't need to scale horizontally and can easily fit on a single server (which can scale to dozens of cores and terabytes of RAM).
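To make the normalization point concrete, here is a minimal sketch using Python's built-in sqlite3 (standing in for PostgreSQL; the tables and names are made up for illustration). Each fact is stored exactly once, keyed by small integers, so the hot working set stays compact and cache-friendly; joins reassemble the denormalized view on demand:

```python
import sqlite3

# Tiny normalized schema: no repeated customer names or prices,
# just narrow rows linked by small integer keys.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE product (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        cents INTEGER NOT NULL          -- price stored once, not per order
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),
        product_id  INTEGER NOT NULL REFERENCES product(id),
        qty         INTEGER NOT NULL
    );
""")
conn.executemany("INSERT INTO customer VALUES (?, ?)",
                 [(1, "Ada"), (2, "Grace")])
conn.executemany("INSERT INTO product VALUES (?, ?, ?)",
                 [(1, "widget", 250), (2, "gadget", 975)])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)",
                 [(1, 1, 1, 4), (2, 2, 2, 1), (3, 1, 2, 2)])

# The joins rebuild the "document" view on demand, while the engine
# works over narrow rows that fit nicely into CPU caches.
total = conn.execute("""
    SELECT c.name, SUM(p.cents * o.qty)
    FROM orders o
    JOIN customer c ON c.id = o.customer_id
    JOIN product  p ON p.id = o.product_id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(total)  # Ada: 4*250 + 2*975 = 2950; Grace: 975
```

The same idea scales up: keep rows small and redundancy low, and a single box with a good planner and warm caches goes a very long way before you need to shard anything.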

I hear the argument that it's the engineers who are expensive, so it's OK to be wasteful with hardware if it saves engineering time. But in my experience, the engineers who are good at optimization are often good at engineering in general, and the ones who forget to think about efficiency are also sloppy with other things. That doesn't mean you have to write your code in C: very fast code can be written in high-level languages with the proper skills (see projects like Fastify).


> Very fast code can be written in high level languages with the proper skills

I sometimes wonder why we don't have mandatory coursework that demonstrates the upper bound of what a modern x86 system is capable of in practical terms.

If developers understood just how much performance they were leaving on the table, they would likely self-correct out of shame. Latency is the ultimate devil, and we need to start burning that into the brains of developers. The moment you shard a business system across more than one computer, you enter hell. We should be trying to avoid this fate, not embrace it.

We basically solved the question "what's the fastest way to synchronize work between threads" around 2010 with the advent of the LMAX Disruptor. But for whatever reason, this work has been relegated to the dark towers of fintech wizardry rather than being perma-stickied on HN. Systems written in C# 10 on .NET 6 which leverage a port of this library can produce performance figures that are virtually impossible to meet in any other setting (at least in a safe/stable way).

This stuff is not inaccessible at all. It is just unpopular. Which is a huge shame. There is so much damage (in a good way) developers could do with these tools and ideologies if they could find some faith in them.
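For anyone curious, the core Disruptor idea can be sketched in a few lines: a preallocated ring buffer plus monotonically increasing sequence counters, instead of locks or queues. This toy single-producer/single-consumer version is in Python purely for readability (CPython's GIL stands in for the memory barriers a real implementation needs; the real LMAX Disruptor and its ports add cache-line padding, wait strategies, and multi-producer coordination):

```python
import threading

SIZE = 8                  # power of two -> cheap index masking
ring = [None] * SIZE      # preallocated slots, reused forever
published = -1            # last sequence the producer has made visible
consumed = -1             # last sequence the consumer has processed
results = []

def producer(n):
    global published
    for i in range(n):
        while i - consumed > SIZE:    # don't lap the consumer
            pass
        ring[i & (SIZE - 1)] = i * i  # write the event into its slot...
        published = i                 # ...then publish the sequence

def consumer(n):
    global consumed
    for i in range(n):
        while published < i:          # spin until slot i is published
            pass
        results.append(ring[i & (SIZE - 1)])
        consumed = i

N = 100
t = threading.Thread(target=consumer, args=(N,))
t.start()
producer(N)
t.join()
print(results == [i * i for i in range(N)])
```

The key properties are all here in miniature: no allocation on the hot path, no locks, and each side only ever writes its own counter, which is what makes the real thing so mechanically sympathetic to modern CPUs.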



Wow, reading about the LMAX Disruptor is fascinating. Thank you for sharing!


Unfortunately, in a lot of cases it's _our_ engineers who are expensive, so we're wasting _your_ CPU/RAM/etc.


> ...we would look at summaries of calls, and take on engineering work to address the most common class of problems.

I learned about engineering away support costs from Robert McNeel & Assoc (RMA), who used to sell AutoCAD. The value-added reseller racket is to slough off support costs onto the dealer channel, which Autodesk did shamelessly. (Probably learned from auto manufacturing.)

RMA thought about revenue in terms of ROI and velocity. Instead of increasing margins, RMA reduced transaction costs.

So RMA made AutoCAD add-ons, nominally low-priced but actually given away, to reduce support costs. Stuff like plot drivers that actually "just worked". Over time, customers learned that the TCO of working with RMA was lower and hassle-free.

--

I applied this strategy as an engineering manager. I had inherited some products (necromancy) with a lot of technical debt. There was intense pressure to add features and restart the software upgrade revenue (long before subscriptions).

I insisted on also chewing through our hottest support costs. Burned A LOT of political capital and goodwill. (One of my rationales was that our small team simply didn't have the resources to manage the high support burden.)

The intangible, unquantifiable benefit was that velocity improved for every one of our products. Each release got easier and proved more rewarding.

It was like printing money. Our customers sold our products for us (word of mouth). So we saved even more money (less need for marketing).

We somehow created a virtuous cycle.

--

Back then, I had guessed that tech debt piles up because of suboptimal bookkeeping: the costs of poor quality were externalized. Meaning lazy, aloof devs cutting corners increased costs for QA and tech support, and worse, negatively impacted sales and marketing.

Today, I'm not so sure. Unifying dev and ops (DevOps) hasn't improved quality. Maybe poor quality is something like a natural law: Sturgeon's Law applied everywhere. The only big examples of aggressive continuous improvement I'm aware of are Musk (Tesla & SpaceX) and Haier. And probably Apple too.

But it seems they attain their results by whipping their employees. Whereas back in the day, we improved quality by doing less work (taking the time to save time).

FWIW, Sandy Munro of Munro & Assoc remains an outspoken proponent of 90s era concepts like quality, continuous improvement, "muda" (remove waste), etc.

If anyone has other links or resources for 3rd-millennium-era quality obsessives, please share.


This is free market economics: improve the product or lower the cost, or fold.


Or just bribe lawmakers at every level of government so you can keep increasing prices while worsening your product, using the funds from your captive audience to preemptively destroy every upstart challenger.

There is a reason Comcast is everywhere and Google Fiber is nowhere...


Utilities in general are in a similar position. PG&E has a magic combination of regulatory capture, monopoly, and too-big-to-fail.


What happened to Google Fiber anyway? It's not like Google doesn't have the money to do whatever necessary to make it work.


> What happened to Google Fiber anyway? It's not like Google doesn't have the money to do whatever necessary to make it work.

They found out that doing things in the real world is hard. So they took their ball and went home. Pretty usual for them.


They wanted to know what would happen if people had super-fast Internet. All the big players made a show of getting on board. Experiment successful, and they got out... Actually operating as an Internet provider wasn't something they wanted as part of their core business; they have enough anti-monopoly mumbling associated with their name as it is.

As for the experiment... I guess the answer was "not much"? More high-definition streaming, a few companies attempting game streaming. It certainly hasn't been the wellspring of innovation some hoped for, but it is also still early days, I suppose.


IIRC, they bought a company that uses wireless APs to deliver the signal within municipalities/cities. The bureaucracy and cost of physical fiber were prohibitive. At least that was my understanding of the situation last time I checked.


Webpass, I believe. How do I know? I’m using it right now!

A good Internet provider, though since I live in a snowy city, the signal can deteriorate a bit during heavy snowfall. Still usable, but slow in those cases. The symmetrical gigabit is damn nice 99% of the time.


That's the weird thing with Google, isn't it? They have the money to do so many things, but they also have a reputation for shutting so many things down a few years after launch.

It is now at the point where you can't commit to a new Google thing and would rather use a competitor instead, because you know it will likely outlast Google's corporate attention span...


They had a hard time with doing whatever was necessary to get their cabling on poles and finding sites for equipment.

They also found that when you light a fire under the butts of incumbent telcos, they can deploy modern infrastructure really quickly. And the telcos already know how to get lines on poles and how to site equipment boxes.


> What happened to Google Fiber anyway?

They are alive, and they use no opt-in for their marketing emails. I know that because my old Gmail account has been getting those mails for over a year already. They keep trying to sell me fiber, but unless they also sell me an address in the US, it's not very useful.

(Yes, I could unsubscribe. But I prefer to report every opt-out mail like that as spam, and also to look at what kinds of opt-out spam I get. Some are even more interesting: unsubscribing requires logging into an account I don't have access to.)


That's weird. Unsubscribe is supposed to automatically "log you in" with one click, by law. See CAN-SPAM.


Possibly neither EU nor USA senders.


Maybe I misread, but I thought they were referring to the senders being Google Fiber related, so most likely a US sender, since Google is in the US?


By "Some" I meant different companies sending such no-opt-in mails.


I think the revelation is that if you offer expensive support (a real person to call, on-time pizza or it's free, next-day onsite repair, etc.), that makes improving the product by fixing bugs easy to justify, because it lowers your support costs.

Of course, you could also lower support costs by having a mostly ignored user support forum.



