We use paraquat quite a bit, but in a way that means we're never actually exposed to the chemical. I imagine our procedures are quite different to how it was handled even 10 or 20 years ago.
I had an idea to automate this process, or even build a user-aided wizard as a product, but never got round to the MVP. Still an OK idea, I think, as satellite imagery on a wall is a great talking point. It can be beautiful, dramatic, and always changing, so it makes for sets of images showing change over time, etc.
No-till farming has been widespread, especially in Australia, for decades. Nearly all dryland farms take this approach. It just means controlling fallow weeds without tillage, usually with herbicide, then planting straight through the previous crop's residue. This preserves moisture in the soil and the soil structure. Planters mostly handle this fine. I'm not sure what has supposedly not yet been invented elsewhere.
I love the enthusiasm, but is this another Google thing that is for researchers only? Yes, fantastic technology, but say you develop something on this infrastructure and then go to commercialise: what do you do?
I don't know much about the ML space, but is this a bit like Google Earth Engine? Amazing tech, with very generous free resources for researchers and development, but it cannot be ported elsewhere, so to commercialise you are then locked into this one environment, which is not cheap. I recently reached out to Google for pricing on GEE; three weeks later I got a response. Three weeks.
> So we're talking about a group of people who are the polar opposite of any Google support experience you may have had.
> Ever struggle with GCP support? They took two weeks to resolve my problem. During the whole process, I vividly remember feeling like, "They don't quite seem to understand what I'm saying... I'm not sure whether to be worried."
> Ever experience TFRC support? I've been a member for almost two years. I just counted how many times they failed to come through for me: zero times. And as far as I can remember, it took less than 48 hours to resolve whatever issue I was facing.
> For a Google project, this was somewhere between "space aliens" and "narnia" on the Scale of Surprising Things.
[...]
> My goal here is to finally put to rest this feeling that everyone has. There's some kind of reluctance to apply to TFRC. People always end up asking stuff like this:
> "I'm just a university student, not an established researcher. Should I apply?"
> Yes!
> "I'm just here to play around a bit with TPUs. I don't have any idea what I'm doing, but I'll poke around a bit and see what's up. Should I apply?"
> Heck yeah!
> "I have a Serious Research Project in mind. I'd like to evaluate whether the Cloud TPU VM platform is sufficient for our team's research goals. Should I apply?"
> Absolutely. But whoever you are, you've probably applied by now. Because everyone is realizing that TFRC is how you accomplish your research goals.
I expect that if you apply, you'll get your activation email within a few hours. Of course, you better get in quick. My goal here was to cause a stampede. Right now, in my experience, you'll be up and running by tomorrow. But if ten thousand people show up from HN, I don't know if that will remain true. :)
I feel a bit bad to be talking at length about TFRC. But then I remembered that none of this is off-topic in the slightest. GPT-J was proof of everything above. No TFRC, no GPT-J. The whole reason the world can enjoy GPT-J now is that anyone can show up and start doing effective work, as much as you can possibly learn.
It was all thanks to TFRC, the Cloud TPU team, the JAX team, the XLA compiler team -- hundreds of people, who have all managed to gift us this amazing opportunity. Yes, they want to win the ML mindshare war. But they know the way to win it is to care deeply about helping you achieve every one of your research goals.
Think of it like a side hobby. Best part is, it's free. (Just watch out for the egress bandwidth, ha. Otherwise you'll be talking with GCP support for your $500 refund -- and yes, that's an unpleasant experience.)
What are some suggested online courses to learn about multivariate time series forecasting? My skill level: OK with university-level biometrics, but that was 10+ years ago, and I'm self-taught in Python for web apps and automating GIS tasks.
Good question. I've been working on this too, iterating through YouTube and Medium tutorials and working through all the notebooks I can find. The best examples I've found use LSTMs for deep learning and vector autoregression (VAR) for classical statistical forecasting.
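For a sense of what VAR actually does, here is a minimal sketch that fits a VAR(1) model, x_t = c + A·x_{t-1} + noise, by ordinary least squares using only NumPy. In practice you'd reach for a library (e.g. statsmodels has a VAR class), but the mechanics fit in a few lines; the data here is simulated, and `fit_var1`/`forecast` are illustrative names, not from any library.

```python
import numpy as np

def fit_var1(series):
    """Fit x_t = c + A @ x_{t-1} + e_t by least squares.

    series: (T, k) array of k variables over T time steps.
    Returns (A, c): (k, k) coefficient matrix and (k,) intercept.
    """
    X = series[:-1]   # predictors: x_{t-1}
    Y = series[1:]    # targets:    x_t
    X1 = np.column_stack([X, np.ones(len(X))])   # append intercept column
    B, *_ = np.linalg.lstsq(X1, Y, rcond=None)   # B stacks [A.T; c.T]
    return B[:-1].T, B[-1]

def forecast(series, A, c, steps):
    """Iterate the fitted model forward from the last observation."""
    x = series[-1]
    out = []
    for _ in range(steps):
        x = A @ x + c
        out.append(x)
    return np.array(out)

# Simulate a stable 2-variable VAR(1) process, then recover its coefficients.
rng = np.random.default_rng(0)
A_true = np.array([[0.6, 0.2],
                   [0.1, 0.5]])
x = np.zeros(2)
data = []
for _ in range(2000):
    x = A_true @ x + rng.normal(scale=0.1, size=2)
    data.append(x)
data = np.array(data)

A_hat, c_hat = fit_var1(data)
preds = forecast(data, A_hat, c_hat, steps=5)
```

With 2000 simulated observations the recovered `A_hat` should land close to `A_true`; the deep learning (LSTM) route trades this interpretability for the ability to capture nonlinear dynamics.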
Also, I'd like to be able to read the Reddit comment without having to leave the site, if possible. Am liking it though, already been down a few rabbit holes. Good stuff.
Thanks for the suggestion. I am adding a feature to expand the comment in the view itself, so users will be able to read the full comment without leaving the site.
Hey, nice idea. I am looking at building a dashboard-style web app like yours. Have you used any frameworks here, or rolled your own? I am on mobile or I'd take a closer look myself.
I am developing a big data collection and visualization platform called QuantaleCore that powers https://quantale.io
The platform allows users to code and launch a container which gets data into the DB, and from there everything else is abstracted. There is a generic API that can be used to request that data from the DB without writing any code, and on the frontend I am developing a library that hooks these generic endpoints up to whatever visualizations I want, much like Kibana or Grafana.
So QuantaleCore is like an end-to-end replacement for Elasticsearch + Logstash + Kibana.
My email is on my profile. Email me if you would like to discuss it more. I would be happy to give you a demo.
PostGIS can write out GeoJSON, so you can have a simple Flask or bottlepy web server query the DB, then use Leaflet's GeoJSON layer to pull down the data and render it on a map.
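A minimal sketch of the middle step: PostGIS's `ST_AsGeoJSON(geom)` returns one GeoJSON geometry string per row, and the server just needs to wrap those rows in a FeatureCollection for Leaflet. The `parcels` table, column names, and `rows_to_feature_collection` helper are all hypothetical; the rows here are hard-coded stand-ins for what a DB cursor's `fetchall()` would return.

```python
import json

# A PostGIS query like the following returns a GeoJSON string per row:
#   SELECT id, name, ST_AsGeoJSON(geom) FROM parcels;
# ("parcels" is a hypothetical table with id, name, and geometry columns.)

def rows_to_feature_collection(rows):
    """Wrap (id, name, geojson_geometry_string) rows in a FeatureCollection."""
    features = [
        {
            "type": "Feature",
            "id": row_id,
            "geometry": json.loads(geom_json),   # parse the ST_AsGeoJSON output
            "properties": {"name": name},
        }
        for row_id, name, geom_json in rows
    ]
    return {"type": "FeatureCollection", "features": features}

# Simulated query results, as a DB cursor would return them:
rows = [
    (1, "Paddock A", '{"type": "Point", "coordinates": [150.5, -28.2]}'),
    (2, "Paddock B", '{"type": "Point", "coordinates": [150.7, -28.3]}'),
]
fc = rows_to_feature_collection(rows)
```

In a Flask view you'd `return jsonify(fc)`, and on the client `L.geoJSON(data).addTo(map)` renders it directly.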
There are of course the newer vector tiles, which are a better solution in many situations but a bit more complicated to get going. Crunchy Data have made some good examples and a nice tile server for this.
PostGIS raster is really interesting to me, but the general sentiment is that it's a bit slow and you should only use it if you need to. It just seems so powerful to combine raster and vector with SQL. What would the alternative be for raster queries both through a time stack and across space, combined with vector data? The GRASS GIS time series tools look interesting but I've not tried those. Any other ideas?
I remain pretty leery of raster in the database, for the reasons you list, but the power of GDAL to reach out over networks for raster access makes me a little more excited about some raster/vector use cases. It's not a panacea, but it's something.
I read that blog post. Such a good example. But there is even a comment in there warning about raster in the database. I am going to try it out as well with some MODIS data. I wonder what needs to be done to make raster more performant in PostGIS?
There's some work to be done to make all the functions parallelizable. After that it gets harder. For large-scale raster analysis, moving the processing closer to the data becomes paramount, and I feel like purpose-built raster processing will always be better. Maybe something with FDW or another client/server architecture could work, where the database sends off the request and the bulk processing happens close to the data. Also, the row-based model of SQL sort of breaks down for raster, where the workload involves much, much larger objects; presenting the tiled-up model to users just adds complexity without giving back performance or flexibility in return.