Hacker News

If the tech just looks at the screenshot image, how does it have near-full coverage of the mobile web but need to train/fingerprint apps?


Good question: with Safari, we use the visible domain to help score confidence for mobile websites. For native apps, there's no such "easy" confidence boost, so we have to fingerprint the app based on the different features present in the captured image.
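To make the two paths concrete, here's a minimal sketch of how that split might look in code. Everything here is an assumption for illustration — the function name `score_confidence`, the `KNOWN_DOMAINS` whitelist, and the flat +0.3 boost are all hypothetical, not the actual system.

```python
# Illustrative sketch (not the actual system): blend a visual
# fingerprint score with a confidence boost when a known retailer
# domain is visible in the screenshot (the Safari / mobile-web case).
from typing import Optional

# Hypothetical whitelist of domains an OCR step might recognize.
KNOWN_DOMAINS = {"amazon.com", "etsy.com", "ebay.com"}

def score_confidence(visual_score: float, visible_domain: Optional[str]) -> float:
    """Combine the model's visual match score with a domain boost.

    visual_score   -- 0..1 output of the image-matching model
    visible_domain -- domain read from the browser chrome, or None
                      for native apps (no URL bar to read)
    """
    if visible_domain in KNOWN_DOMAINS:
        # Mobile web: the URL bar gives a cheap, reliable extra signal.
        return min(1.0, visual_score + 0.3)
    # Native app: fall back to the visual fingerprint alone.
    return visual_score
```

With this sketch, the same visual score yields higher confidence on mobile web (`score_confidence(0.6, "amazon.com")`) than in a native app (`score_confidence(0.6, None)`), which matches the coverage gap the question is asking about.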

The system is computer-vision/machine-learning based, so even on novel sites it will get better over time with more usage and training. We've already trained it on a bunch of the most popular sites, though.

Does this make sense?


Sounds impressive. Do you have a pre-existing product database that you match them up to or do you just create products on the fly?


I would strongly guess it's a fixed set of product images they're training against, possibly obtained by large-scale scraping. Another part of training or processing might use a reverse image search API, like TinEye, and gather metadata from the pages containing the result images.
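One simple way the "fixed set of product images" guess could work is perceptual hashing: hash each catalog image once, then find the catalog entry whose hash is closest to the screenshot crop's. The sketch below is a dependency-free toy — images are tiny grayscale grids, and a real system would more likely use learned embeddings — but it shows the matching idea.

```python
# Toy sketch of matching a screenshot crop against a fixed product-image
# set via average hashing (aHash). Purely illustrative; images here are
# tiny grayscale grids (lists of lists of 0-255 ints) so the example
# needs no image libraries.

def ahash(pixels):
    """Return a bit string: '1' where a pixel is above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bit positions between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

def best_match(query, catalog):
    """Return the catalog key whose hash is closest to the query's."""
    qh = ahash(query)
    return min(catalog, key=lambda name: hamming(qh, ahash(catalog[name])))

# Hypothetical 2x2 "product images" standing in for a scraped catalog.
catalog = {
    "dark_mug":   [[10, 20], [30, 40]],
    "bright_mug": [[200, 210], [90, 100]],
}
# A noisy crop of the bright mug still hashes closest to it.
query = [[190, 220], [80, 110]]
```

Because the hash only encodes which pixels sit above the mean, small lighting and compression differences in the screenshot don't change the match — which is roughly the robustness you'd want before falling back to a reverse-image-search step.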


We do our own searching and wrote our own crawlers. Does that answer your question?


It does indeed. I'm very impressed. Keep up the good work!



