Hacker News | sameline's comments

We're still working on answering that, but we want the user experience to be something we really like using ourselves. That means the website will remain ad-free, we won't send email marketing spam, and we won't sell user data to advertisers.

Some options we're considering:

1. Offering premium plans that allow tracking more products with more frequent updates.

2. Allowing users to opt in to offers on the specific products they're already tracking.

We're still not sure we've found product-market fit with this idea, so we're staying open-minded and trying not to get too attached to any specific monetization tactic.


3. Affiliate programs with online retailers?

After looking at doing something similar, that was the only monetization angle I could find.


This is also worth considering, thanks. We would need to be careful to avoid overwriting any pre-existing affiliate tags set on a user-provided URL, as Honey was doing [0]. I’m curious why you stopped pursuing the idea, if you don’t mind sharing.

[0]: https://www.youtube.com/watch?v=vc4yL3YTwWk
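The "don't clobber existing affiliate tags" check could be sketched roughly like this. The parameter names below are assumptions for illustration only; each retailer uses its own (Amazon uses `tag`, for example), so a real implementation would need a per-retailer mapping.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical set of affiliate query parameters to look for.
# Real retailers each use their own names; this set is illustrative.
AFFILIATE_PARAMS = {"tag", "affid", "clickid"}

def has_affiliate_tag(url: str) -> bool:
    """Return True if the user-provided URL already carries an affiliate tag."""
    query = parse_qs(urlparse(url).query)
    return any(param in query for param in AFFILIATE_PARAMS)

def add_our_tag(url: str, our_tag: str) -> str:
    """Append our affiliate tag only when the URL has none,
    leaving any pre-existing referrer's tag intact."""
    if has_affiliate_tag(url):
        return url
    sep = "&" if urlparse(url).query else "?"
    return f"{url}{sep}tag={our_tag}"
```

The key design choice, versus what Honey was doing, is that the original creator's tag always wins when one is present.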


blancotech and I have been working on this together and sharing it with friends and family over the last few months. Looking forward to any feedback on whether this seems like a useful tool!


Eventually one of these comment threads is going to be included in the training set, invalidating this as a test.


Which is why the knowledge cut-off date is important. I'd prefer it frozen to pre-ChatGPT-3.5; anything after the ChatGPT-3.5 release date should be considered tainted - imagine the sheer number of articles generated by spammers using ChatGPT.


A knowledge cut-off date doesn't prevent your model from getting tainted, though - if you're doing any kind of RLHF, then unless all your human reviewers were kept isolated from the world since ${knowledge-cutoff-date}, they will inadvertently give the model glimpses of the future.

It's not immediately apparent to people just how much leakage can happen this way. Until a year ago, I'd probably have given people this story[0] to ponder, but now it's no longer a hypothetical - GPT-3.5 and GPT-4 are clear, practical demonstrations of just how much knowledge is implicitly encoded in what we say or write, and of how this knowledge can be teased out of the input data without any prior context, completely unsupervised, given sufficient time and effort (which in silico translates to "sufficient compute", which we already have).

--

[0] - https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...


That might be fair in the short term. However, it's not a workable option long-term, or all such models will become very limited in their knowledge as humanity advances technologically and culturally.


To be honest, LLMs are themselves a short-term approach and can get us, at most, to AGI levels (for the current era). I don't see us getting to ASI with just LLMs. The sort of "emergent ability" ASI requires has to come from something simpler, with learning that is more "virulent" / "instantaneous" (not sure those words convey what I really mean). Otherwise, LLMs will always have a "maxima" at which they fail, and that maxima is the collective intelligence of all of humanity in the current epoch. If you went back a thousand years, the collective intelligence of humanity would be completely different (primitive, even). Would LLMs trained on that data have produced the knowledge we know today? I don't think so. They could still, theoretically, reach AGI for that era and accelerate the pace of learning by 50-100 years at a time. LLMs will surely accelerate the pace of learning (as tools) even now, but by themselves they won't reach ASI levels. For ASI, we really need something simpler and more fundamental that is yet to be discovered. I don't feel LLMs are the way to ASI. AGI? Yeah, possible.


The same is true for humans - a scientist inventing everything from their head would not achieve much, but if they can conduct experiments, and if they persevere, they eventually make discoveries. A pure LLM is like the first case; an LLM with tools, or as part of a larger system, is the second.


My partner and I use anylist.com on iOS to plan each week’s meals. It supports creating a calendar that you can subscribe to in your calendaring app; that calendar has an all-day event for each of the day's meals. It's handy to see at a glance what we’ll cook for dinner alongside the rest of our evening plans.


flightyapp.com on iOS also supports flight import via email and has calendar integration.


The device emulation tools in Firefox and Chrome let you specify viewport dimensions and allow you to capture a screenshot of the viewport. I like to use this feature to standardize “full window” screenshots. It also excludes the window chrome which is nice.


This looks like UCLA so probably Starship.

https://asucla.ucla.edu/2021/01/27/asucla-restaurants-brings...


Yes, it’s also Starship at my campus.


Loading remote images lets the email client send a tracking signal. Blocking images lets the user choose whether that signal is sent.


Ohh, I see - like a Facebook Pixel tracking image. No, I don't think that's blocked. Then again, it's not blocked in most email clients; perhaps it's difficult to accomplish.

Nonetheless, I've added it to the list of upcoming features as I believe it's worth a shot!

Thank you so much for your feedback :)


There is an option to block it in Gmail/Inbox.


That is excellent news.

I will look into that. It does not sound like it will be too tricky to implement, so expect a quick turnaround time :)


I believe there are a couple of approaches to this, with varying amounts of data storage. You can download all images immediately as the email is received (preventing image tracking, since every image is fetched right away rather than when the user opens the email), or you can block all images and selectively turn them on when necessary (unless the image is inline, etc.).


Thank you so much for your suggestion. This is gold.

I really like your approach to solving the problem.

I have taken note of your suggestions and will look into this further asap :)


Cool - not saying you have to do it like them, but the most popular option there is to block all images globally while letting the person click a button to unblock images for that specific email, which then shows a dialog to "always unblock images from this sender."

I like that workflow; may be something to consider. I block all images but allow images from known senders, like my parents or co-workers.
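That per-sender allowlist workflow amounts to very little state. A minimal sketch (the function names here are made up for illustration):

```python
# Senders start blocked by default; clicking "always unblock images
# from this sender" adds them to the allowlist.
allowed_senders = set()

def should_load_images(sender: str) -> bool:
    """Decide at render time whether this email's remote images load."""
    return sender.lower() in allowed_senders

def always_allow(sender: str) -> None:
    """Handler for the 'always unblock images from this sender' button."""
    allowed_senders.add(sender.lower())
```

In a real client the set would be persisted per account, and the per-email "unblock once" button would bypass the check without mutating the allowlist.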


Thank you so much for your feedback.

This is a great approach to achieving our goal.

I have taken note of your suggestions and will look into this asap :)


Thunderbird has this feature. I almost never need to look at an image in an email, unless it's a photo from a friend.


Fantastic, I have an example to learn from :D

Thank you for the tip!


If a business cannot afford to properly handle and audit customer data, then it should avoid collecting it at all. Businesses that derive value from customer data should be able to pay for the necessary protections.


It’s not unreasonable to imagine that any of the examples provided in the source post could require a team larger than 10 people.

