Cool post. I'd be interested in seeing models like this deployed to the satellites themselves.
Typically, data gathered from satellites needs to wait for the satellite to do a pass over a dedicated ground station, probably somewhere in the US, before it can be processed. If you move the processing from the ground station to the satellite, then you 1. Don't have to transmit as much data, 2. Can transmit actionable intelligence much faster. It can be upwards of 90 minutes before a satellite passes over its ground station. If you could get that down to a few seconds, I could see some serious applications in disaster response.
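For intuition on the ~90-minute figure: that's roughly one full orbit for a satellite in low Earth orbit, which you can sanity-check with Kepler's third law. A quick back-of-envelope sketch (the 550 km altitude is just an assumed, typical LEO value):

```python
import math

MU_EARTH = 398_600.4418  # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0        # mean Earth radius, km

def orbital_period_minutes(altitude_km: float) -> float:
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

# At a typical ~550 km LEO altitude, one orbit takes about an hour and a half,
# so a single dedicated ground station can easily go ~90 minutes between passes.
print(f"{orbital_period_minutes(550):.1f} min")  # ~95.5 min
```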
> It can be upwards of 90 minutes before a satellite passes over its ground station.
Planet Pelican will have the ability to communicate with other satellites, meaning you don't necessarily need a ground station: https://www.planet.com/products/pelican/
> 1. Don't have to transmit as much data
There is definitely a bit of a move to do some work on the actual satellite, depending on the use case. This is pretty doable if you have a very specific use case you can optimize for, but it gets harder if it has to work for every possible use case.
From discussions I've seen online, a lot can fail on satellites so there is a bias to do as little as possible on them. There does appear to be plenty of bandwidth to transmit back to ground stations.
The real win will be satellite-to-satellite transmissions where any data collected by the constellation is passed to the satellite that'll next fly over a ground station. This will lower the time from capture to analysis considerably. The fresher the data, the more valuable it is.
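As a toy sketch of that routing idea (the satellite names and pass times below are entirely made up), the constellation just hands new data to whichever member will downlink soonest:

```python
# Hypothetical next ground-station pass for each constellation member,
# in seconds from now.
next_pass_s = {"sat-a": 3400, "sat-b": 420, "sat-c": 1800}

def best_relay(passes: dict[str, float]) -> str:
    """Pick the satellite with the earliest upcoming downlink window."""
    return min(passes, key=passes.get)

# Freshly captured data gets forwarded over inter-satellite links to sat-b,
# cutting capture-to-ground latency from ~57 minutes to ~7.
print(best_relay(next_pass_s))  # sat-b
```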
It’s possible I’m missing visibility into some part of the industry, but I don’t think this has been true for quite some time. There are multiple Ground Segment as a Service providers around the globe, with billions in yearly revenue, from which satellite operators can buy radio time. Most satellites transmit live to the nearest ground stations in one or more networks, with the data transported over IP, unless that capability isn’t required.
I think it's highly dependent on the significance of the memento. I have physical credentials I received for a major athletic event I competed in, and never in my life would I consider taking a picture of them and throwing them away.
It doesn't sound like you were all that attached to the coffee mugs to begin with, in which case digitizing them was a good move.
I'd imagine this is part of the reason why my parents still keep my terrible first grade art pieces framed on their desks.
It’s likely they know someone they’ve effectively already offered the job to, but equal opportunity laws require a job posting to be made. So while they are technically “hiring”, the job posting is fairly meaningless.
There are easier ways to address equal opportunity requirements (really just immigration stuff)
More often you just want a way for extremely strong tactical inbound candidates to find you. Even if you're not desperate to fill the role, you want strong candidates to find you, reach out, and then work with them to find a role that fits them.
Blue Origin's New Glenn achieves successful booster recovery and payload-to-orbit delivery. Initially this is seen as a huge win and they emerge as potential challengers to SpaceX; however, manufacturing issues with BE-4 engines and questions about the design choice of New Glenn's massive 7m fairing (when satellites are converging towards smaller, more compact designs) will cause speculation about their product-market fit.
Author here: I should clarify the satellite is not running Windows. Instead, it’s running its own custom OS written in C called Flight Software (FSW) specifically designed for the satellite onboard computer.
Re-reading the post, I see how the title, my analogies, and poor attempts at humor would give an incorrect impression of what’s happening with the satellite when it enters safemode. I’ll amend the post soon.
Thanks for the feedback, I’ll be better next time.
Could I ask you to clarify why avoiding safemode is so important? In a non-satellite system, safemode means everything is driven to a safe state, which is fine during testing in the lab.
Also do you not run these tests in an even more simulated environment where there is only the flight computer and no real hardware at all?
Having discussed this same question with the more experienced members of my team, the only conclusion I can draw is that the customer (US Government) is incredibly risk averse. Any unexpected entry into safemode would require a report, multiple meetings with the customer, and them being pretty angry. Their line of reasoning seems to be "Safemode->Something is wrong->Why is something wrong? We're not paying you to be wrong". I'm personally of the opinion that safemode isn't that bad. It's fully recoverable and shows the system is working properly.
We normally have a Functional Test Assembly (real computer and some other hardware for testing) to run our tests against, but we only have one setup and it is consistently unreliable. This particular CLT was unable to get a clean run in the lab but it was decided that the issues were related to the lab setup rather than the actual test, so we moved forward to run on the satellite (against our team's protests).
This to me is the real crux of the issue: if we can't even trust our own testing environment, what's the point of having it at all? If the customer is so risk averse, why would we take this chance? Needless to say, I don't think we'll be running anything on the satellite without full FTA vetting anytime in the near future.
> Any unexpected entry into safemode would require a report, multiple meetings with the customer, and them being pretty angry. Their line of reasoning seems to be "Safemode->Something is wrong->Why is something wrong? We're not paying you to be wrong". I'm personally of the opinion that safemode isn't that bad. It's fully recoverable and shows the system is working properly.
To the last part first: Good that safe mode kicked in and did the right thing, but now what? What caused it to enter safe mode in the first place?
That's why they care when it happens. If they don't know why it's entering safe mode, they can't correct the actual problems in the system.
"Safemode is when all non critical functions are automatically shut down and the satellite becomes entirely focused on generating power by pointing its solar panels towards the Sun and trying to reestablish any communication that was lost."
The non-critical functions are all the things the customer actually bought the satellite for. Cool that it's still alive, but now the Space Internet / death lasers / etc. are offline.
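A deliberately tiny, hypothetical sketch of the behavior quoted above (the subsystem names are invented): shed every non-critical load and keep only what keeps the bus alive.

```python
# Subsystems that survive safemode: power generation, the radio link,
# and the attitude control needed to point the panels at the Sun.
CRITICAL = {"power", "comms", "attitude_control"}

def enter_safemode(active: set[str]) -> set[str]:
    """Shut down all non-critical subsystems and return what's left running."""
    for subsystem in sorted(active - CRITICAL):
        print(f"shutting down {subsystem}")  # i.e. the payload the customer paid for
    return active & CRITICAL

enter_safemode({"power", "comms", "attitude_control", "imager", "propulsion"})
```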
There are fault IDs that trip if certain telemetry goes outside of a normal range. If a safemode were to occur, we would investigate which faults tripped and at what time, and use those to construct a "story" of what happened on the satellite before it entered safemode. We're also constantly recording every piece of telemetry that comes down, so we can reference any telemetry we want from as far back as months in the past.
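A hypothetical illustration of that fault-ID mechanism (the channel names, limits, and values here are invented, not from any real FSW): each out-of-range sample trips a fault, and sorting the trips by time gives you the story.

```python
from dataclasses import dataclass

# Invented telemetry limits: (low, high) bounds for each channel.
LIMITS = {"battery_v": (24.0, 33.6), "wheel_rpm": (-6000.0, 6000.0)}

@dataclass
class Sample:
    t: float       # seconds since some epoch
    channel: str
    value: float

def tripped_faults(samples: list[Sample]) -> list[tuple[float, str]]:
    """Return (time, description) for every out-of-range sample, time-ordered."""
    events = []
    for s in samples:
        lo, hi = LIMITS[s.channel]
        if not lo <= s.value <= hi:
            events.append((s.t, f"{s.channel}={s.value} outside [{lo}, {hi}]"))
    return sorted(events)

telemetry = [Sample(10.0, "battery_v", 28.1),
             Sample(11.0, "wheel_rpm", 6400.0),  # wheel over-speed trips first...
             Sample(12.0, "battery_v", 22.9)]    # ...then the bus voltage sags
for t, msg in tripped_faults(telemetry):
    print(t, msg)
```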
To your point, yes you're correct. The cause of the safemode is much more interesting than the fact we entered it.
> We normally have a Functional Test Assembly (real computer and some other hardware for testing) to run our tests against, but we only have one setup and it is consistently unreliable
It's interesting to see that someone with a $2B budget has the same problem as someone with a $5 million budget... we have an engineering model for our cubesats but it's flaky
I enjoyed the humour, and the content. Personally I wouldn’t change it - it’s kind of a click-bait title, but I never would have read the article if it had a boring title, and I am glad I read it.
Can you speak at all as to how the development on this software is done? Is it distributed with centralized version control? Does release and engineering process interact with the version control at all? Are there mechanisms that link defect reports, corrections, and sign offs back to version control and into the build system?
I got lost recently in how the Shuttle software was managed, mostly through IBM mainframes and z/OS's facilities for all the above. I'm curious how modern development looks in comparison.
> I got lost recently in how the Shuttle software was managed, mostly through IBM mainframes and z/OS's facilities for all the above. I'm curious how modern development looks in comparison.
Do you have any references for this? I also recently went down a research rabbit hole of the history of computing on Earth and in space - super interesting stuff. And the parallels are quite obvious when you look at it.
> And the parallels are quite obvious when you look at it.
The insane level of detail and strategy when writing the Shuttle software is something to behold. The testing laboratory, SAIL, contained a full-scale orbiter with real avionics that "flew" simulated test missions on the ground. "Day of use I-Loads" are one of my favorite things: they couldn't change the software load, but they could move some constants around before launch, which was really useful for feeding wind data into the Shuttle before it launched.
FSW development is done by a different team than mine, but I believe it's just managed through GitLab. Releases are done through tags, and any updates that need to be made have tickets created for them and are developed by the FSW team. Final approval is given by certified product engineers and then a new tag is created for that release.
Like I said, this is a different team, but from what I've seen the process is fairly modern given how old our hardware is. I'm not sure of the exact process of how it's loaded onto the satellite, though.
Technical blog pro tip: Assume that many of your readers are VERY literal-minded, and many of your other readers like their humor obscure and as deadpan as possible. Sorry.
Exoplanet detection is a very precise measurement. The two main methods we have (Doppler shift of the host star from the planet's orbit and the transit of the planet across the star) are not exhaustive by any means and are biased towards discovering large planets that have short orbital periods. The fact that more Earth-sized planets are being discovered and could serve as targets of analysis by JWST is an exciting prospect, and I can't wait to see its development!
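For a sense of scale on that size bias, here's a back-of-envelope sketch (my numbers, not the comment's) using the standard transit-depth relation, depth ≈ (R_planet / R_star)², for a Sun-like star:

```python
R_SUN_KM = 696_000.0

def transit_depth_ppm(r_planet_km: float, r_star_km: float = R_SUN_KM) -> float:
    """Fractional dip in starlight during a transit, in parts per million."""
    return (r_planet_km / r_star_km) ** 2 * 1e6

print(f"Jupiter: {transit_depth_ppm(69_911):,.0f} ppm")  # ~10,000 ppm, a 1% dip
print(f"Earth:   {transit_depth_ppm(6_371):,.0f} ppm")   # ~84 ppm, ~120x shallower
```

A ~1% dip is relatively easy to spot; an ~84 ppm dip demands space-grade photometry, and short orbital periods mean more transits per observing campaign, which is exactly the bias described above.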