
The enclosures are, in my experience, garbage and unreliable.

- They struggle with fragmentation

- Sometimes, the UEFI/BIOS just won't see the disk. Not sure why; I'm guessing the enclosure doesn't boot fast enough?

- Sometimes the enclosure just wouldn't read the ISO list. Again, no idea why. Fragmentation maybe?


Homeopathy works!

/s


How many DisplayPort channels does each port support?

How many PCI-E lanes does each port get?

Do the ports share the channels/lanes?

Do they both support Power Delivery? How many watts does the ThinkPad need to charge?

Even with 'clear' labelling, it can still be a crapshoot as to the full capabilities of the device.


Well, this is where we get the phrase "Plug and Pray", isn't it! ;-)

Right now I have an Uptab Mini-DisplayPort adapter in one port, and a StarTech full size DisplayPort adapter in the other. The Uptab adapter has its own USB-C input for power, so that is where the power supply is connected.

The mini and full size DisplayPorts are each connected to a 4K monitor, and I use all three - the two externals and the ThinkPad's WQHD. This all works great when the Uptab is in the rearmost USB-C port and the StarTech is in the other port. But if I switch the two around, one of the monitors goes into low resolution mode!

This kludgy configuration is just because these were the adapters I had on hand. There are a number of adapters that plug into a USB-C port and give you two mini-DisplayPort outputs - I have been meaning to try one out.


There are already companies out there doing 'biometric' analysis of user sessions to discern between authentic, fraudulent and automated sessions, and they're already being applied to things such as loss prevention in financial firms.

I had always assumed that this sort of analysis was already done on the 'slider captchas'. It wouldn't surprise me if this becomes a thing.

Humans are almost never going to hit the exact centre of the box, and unless the browser does some smoothing, I suspect they never swipe smoothly and horizontally.
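A crude version of that check can be sketched in a few lines (a toy illustration; the function name and features are made up, and real systems use far richer models): given recorded (x, y) pointer samples, measure how far the path strays from a straight line and how even the per-step speed is.

```python
import math
import random

def path_features(points):
    """Simple 'biometric' features for a list of (x, y) pointer samples:
    max deviation from the straight start-to-end line, and the variance
    of per-step travel distance. A robotic drag scores ~0 on both."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    length = math.hypot(x1 - x0, y1 - y0) or 1.0
    # Max perpendicular distance of any sample from the start-to-end line.
    deviation = max(
        abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
        for x, y in points
    )
    # Variance of step sizes: a constant-speed drag gives zero.
    steps = [math.hypot(bx - ax, by - ay)
             for (ax, ay), (bx, by) in zip(points, points[1:])]
    mean = sum(steps) / len(steps)
    speed_var = sum((s - mean) ** 2 for s in steps) / len(steps)
    return deviation, speed_var

# A robotic, perfectly horizontal swipe vs. a wobbly human-ish one.
bot = [(x, 100) for x in range(0, 200, 10)]
human = [(x + random.uniform(-1, 1), 100 + random.uniform(-3, 3))
         for x in range(0, 200, 10)]
```

The bot path scores exactly zero on both features, while any hand-drawn path picks up some wobble.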


Then bot makers will start applying fuzzing to the movements, randomly placing the cursor inside a region over the target, varying the speed, releasing the slider and trying again, etc.

There's no reason yet to go to such lengths, so the extra effort would be wasted, but as soon as it becomes necessary, someone will do it.
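For illustration, such fuzzing might look something like this (a toy sketch; the function name and jitter ranges are invented): uneven step sizes, vertical wobble, a landing point somewhere inside the target region rather than dead centre, and a small overshoot-and-correct at the end.

```python
import random

def humanized_slider_path(start_x, target_x, y=100):
    """Generate (x, y) pointer positions mimicking a sloppy human drag."""
    # Land anywhere in a region around the target, not its exact centre.
    end_x = target_x + random.uniform(-5, 5)
    overshoot = end_x + random.uniform(2, 8)   # drag slightly too far...
    path, x = [], float(start_x)
    while x < overshoot:
        x += random.uniform(4, 18)             # varying "speed" per step
        path.append((min(x, overshoot), y + random.uniform(-2, 2)))
    path.append((end_x, y + random.uniform(-1, 1)))  # ...then correct back
    return path
```

Each run produces a different path, which is exactly the point: no two "swipes" look alike.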


The endgame is probably training some neural net with real user behavior and using that to generate realistic usage patterns.


Then the spammers will use that data to create more bots.


Much cheaper to just hire poor people to click boxes.


Even the bot in this blog post does fuzzing.


A trained human is likely capable of doing it exactly like Puppeteer does, minus the fact that the pointer may move instantly when a machine is doing it. One wrong assumption covered, two more to go.

The second wrong assumption is that bots don't fuzz their movements; as pointed out, fuzzing is already a thing.

The third is that you can't be sure of the input device. A joystick-style pointer like the one seen on ThinkPads is uncommon but still in use. Touchscreens are common and human-ish, though you will have to cover them too. The extreme case is some X-savvy person moving the pointer with X's built-in capabilities, via dmenu and the like; even if that's highly unlikely, it will still fail your check.

There's no point in doing all the possible checks you may come up with: edge cases, no matter how impossible they look, still exist and easily break the consistency of the slider solution. Google's captcha and FunCaptcha are nearly the best options you can readily get, freeing your hands and head of a huge deal of the hassle you'll face when dealing with this task.


Is there a plan to sell the data as an API?

I am currently working with a client who is exploring the options for building out a similar system.


And/or sell a service where you BYO keyword list and this ranks them. Keyword lists can themselves be scraped from forums and websites in the markets the client is interested in. The scraping could also be an add-on service.

This would lock in nicely with the keyword research that ahrefs, moz, etc. offer. And various other colour-hat SEO software.


Selling the data as an API is definitely a possibility. I was originally thinking of allowing CSV downloads of the data. Would an API be more useful for your client's use case?


You seem to have done most of the hard work: collecting the data. I'm not sure they'd care what format the data was in, provided it could be expanded to cover the topics their clients need.


Yeah absolutely - topic coverage can be expanded. Shoot me an email and we can chat more: josh at trennd dot co


Sounds great, I'll ping you an email sometime next week!


> yeah, no. Netflix makes “native” apps for a mind-boggling number of platforms, including set-top boxes and game consoles.

I think all of the STB platforms (at least in the UK) are HTML & JS now. I think they used to be flash.


Every example seems to follow this pattern

  client = pymemcache.client.Client(('127.0.0.1', 11211))  #2 create a client

  # save to memcache client, expire in 60 seconds.
  @ring.memcache(client, expire=60)  #3 lru -> memcache
  def get_url(url):
      return requests.get(url).content

How are you supposed to configure the client at 'runtime' instead of 'compile time' (when the code is executed and not when it's imported)?

Careful placement of imports in order to correctly configure something just introduces delicate pain points. It'll work now, but an absent minded import somewhere else later can easily lead to hours of debugging.


   @ring.memcache(client, expire=60)   
   def get_url(url):
       return requests.get(url).content
can be written:

    def get_url(url):
        return requests.get(url).content

    get_url = ring.memcache(client, expire=60)(get_url)
Decorators are just syntactic sugar for that pattern.

You are then welcome to instantiate your ring.memcache object and bind it where it pleases you.

I would have provided a different API though:

   cache = pymemcache.client.Client(('127.0.0.1', 11211))

   @cache.lru(expire=60) # wrapper of ring.cache(client)
   def get_url(url):
       return requests.get(url).content
And accepted the alternative:

   cache = pymemcache.client.Client(conf_factory)

   def get_url(url):
       return requests.get(url).content

   get_url = cache.wraps.lru(get_url, expire=60)
  
It's better not to expect everyone to know the details of decorators just to use your API, and a factory is a nice hook to have anyway: it says where the code for that dynamic configuration should go, and code as documentation is the best doc.

Also a patch() context manager would be nice for temporary caching:

   with cache.patch('module.lru', expire=60):
        get_url()
But it's hard to do in a thread-safe way, so compromises would have to be made.
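A non-thread-safe sketch of that patch() idea (a hypothetical helper, not ring's API): temporarily swap the function for a cached wrapper and restore it on exit. Another thread could observe the patched name mid-block, which is exactly the compromise in question.

```python
import contextlib

@contextlib.contextmanager
def patch_cached(namespace, name):
    """Temporarily replace namespace[name] with a memoizing wrapper."""
    original = namespace[name]
    store = {}
    def cached(*args):
        if args not in store:
            store[args] = original(*args)
        return store[args]
    namespace[name] = cached
    try:
        yield
    finally:
        namespace[name] = original   # restore the uncached function

calls = []
def get_url(url):
    calls.append(url)                # track real "fetches"
    return f"fetched {url}"

with patch_cached(globals(), "get_url"):
    get_url("a"); get_url("a")       # second call served from the cache
```

Inside the block the second call hits the cache; after the block, calls go to the real function again.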


Although this is true, it is terrible from a developer UX perspective.

Yes, you can "dynamically"-decorate your functions at run-time using whatever global conditionals.

Yes, you can re-decorate the ring decorators.

But you shouldn't have to.

This design is guilty of the cardinal sin of being un-pythonic.


That's what I said. Read again.


You can use a closure to pass in the configuration.

    def configure_memcache(client_ip, port):
        client = pymemcache.client.Client((client_ip, port))
        @ring.memcache(client, expire=60)
        def get_url(url):
            return requests.get(url).content

        return get_url
Then in your code which imports the above library:

    get_url = configure_memcache('127.0.0.1', 11211)
    result = get_url('https://www.google.com')


I'd rather have a sane API.

    def configure_ring():
        if DEBUG:
            return Ring(backend='debug')
        else:
            return Ring(backend='memcache', ...)

    ring = configure_ring()
    
    @ring.cache(expire=60)
    def get_url(...):
        ...
Tons of other libraries out there that implement this exact pattern.


agree


This assumes you define get_url().


This is a good point. The asyncio backends now partially take an initializer function, because calling await at import time doesn't make sense.

I think it also needs to take a client configuration or a client initializer. Any advice from your use case?


I have been thinking about setup and teardown for asyncio apps in Python lately.

The `async with` block is a nice idea, but it doesn't deal with the reality that a resource often has multiple consumers. For instance, several components of an application might use a database connection -- I really want to make the connection once and tear it down only after all of the clients of that connection have themselves been torn down.

What I'm imagining the answer to be is something a little bit like the Spring Framework but fundamentally centered around asyncio.


I think there are two common situations that a 'compile time' configuration would not support.

- Loading configuration from `main()`, e.g. configuration passed in via sys.argv and processed by argparse.

- Setting configuration within tests. Unless explicitly told otherwise, I'd expect all tests to run against an empty cache. Not to mention, there's no guarantee that I'll have access to a server during tests.


dogpile.cache author here.

The way dogpile does this is that your decorator is configured in terms of a cache region object, which you configure with backend and options at runtime.

https://dogpilecache.sqlalchemy.org/en/latest/usage.html#reg...

I got this general architectural concept from my Java days, observing what EHCache did (that's where the word "region" comes from).
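The region idea can be sketched without the library (toy code, not dogpile's actual implementation; expiration_time is accepted but ignored here): the decorator binds to a region object, and the concrete backend is attached later at runtime.

```python
class Region:
    """Toy cache region: decorate now, pick the backend later."""
    def __init__(self):
        self.backend = None

    def configure(self, backend):
        self.backend = backend           # any mapping-like store

    def cache_on_arguments(self, expiration_time=None):
        def decorator(fn):
            def wrapper(*args):
                key = (fn.__name__,) + args
                if key not in self.backend:
                    self.backend[key] = fn(*args)
                return self.backend[key]
            return wrapper
        return decorator

region = Region()

@region.cache_on_arguments(expiration_time=60)
def get_url(url):
    return f"fetched {url}"   # stand-in for a real HTTP fetch

region.configure({})  # runtime: swap in memcache, redis, a dict for tests...
```

The decoration happens at import time, but it only captures the region object, so configuration stays free to happen whenever you like.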


Surely it's just:

   client = pymemcache.client.Client(('127.0.0.1', 11211))
   cache_wrapper = ring.memcache if some_condition else ring.whatever

   @cache_wrapper(client, expire=60)
   def get_url(url):
       return requests.get(url).content


But if I can enter stuff into the blockchain (and presumably I can; you can't breed pigs by consensus...), why would I care about double-spend?

If I acquire some stolen pigs, or some pigs from a less than ideal lineage, I'll just enter them into the blockchain and say they're the result of breeding.

Blockchain works because it's verifiable. Breeding pigs, digging carbon (in its many forms) out of the ground, and turning these things into other things aren't.


I'd presume that entering them into this blockchain would require submitting their biometrics, at which point it'd be indelibly recorded that you had 37 new pigs, 33 of which looked identical to pigs stolen last weekend. Or you could fabricate biometrics, at which point you're fine until you try to sell them and then someone notices that the pigs you have aren't the pigs you own in the system.


It also makes it harder to keep, say, 2000 untaxed pigs on top of the 1000 pigs you pay tax for, then sell the untaxed ones as the 1000 official pigs, with each official pig sold three times to three different slaughterhouses. If everything is publicly recorded, such as on a blockchain, you can't get away with that as easily.


They're just trying to replicate the real UK supermarket experience, with the convenience of home delivery.

It's a feature, not a bug.


Banks will also introduce imperfections into their UI, and correlate how you interact with it to determine if you're human.

Effectively an invisible captcha.

