Well, this is where we get the phrase "Plug and Pray", isn't it! ;-)
Right now I have an Uptab Mini-DisplayPort adapter in one port, and a StarTech full size DisplayPort adapter in the other. The Uptab adapter has its own USB-C input for power, so that is where the power supply is connected.
The mini and full size DisplayPorts are each connected to a 4K monitor, and I use all three - the two externals and the ThinkPad's WQHD. This all works great when the Uptab is in the rearmost USB-C port and the StarTech is in the other port. But if I switch the two around, one of the monitors goes into low resolution mode!
This kludgy configuration is just because these were the adapters I had on hand. There are a number of adapters that plug into a USB-C port and give you two mini-DisplayPort outputs - I have been meaning to try one out.
There are already companies out there doing 'biometric' analysis of user sessions to distinguish between authentic, fraudulent and automated sessions, and it's already being applied to things such as loss prevention at financial firms.
I had always assumed that this sort of analysis was already done on the 'slider captchas'. It wouldn't surprise me if this becomes a thing.
Humans are almost never going to hit the exact centre of the box, and unless the browser does some smoothing, I suspect they never swipe smoothly and horizontally.
Then bot makers will start applying fuzzing to the movements, randomly placing the cursor inside a region over the target, varying the speed, releasing the slider and trying again, etc.
There's no reason yet to go to such lengths, so the extra effort would be wasted, but as soon as it becomes necessary, someone will do it.
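The fuzzing described above is cheap to implement. A minimal sketch (hypothetical `fuzzed_path` helper, not taken from any real bot framework): pick a random landing point inside the target box rather than its centre, then walk toward it with jittered steps instead of a perfectly straight line.

```python
import random

def fuzzed_path(start, target_box, steps=30):
    """Sketch of movement fuzzing: land on a random point inside the
    target box (never its exact centre) via a jittered, non-straight path."""
    x0, y0 = start
    left, top, right, bottom = target_box
    # choose a random landing point inside the box
    tx = random.uniform(left, right)
    ty = random.uniform(top, bottom)
    path = []
    for i in range(1, steps + 1):
        t = i / steps
        # linear interpolation plus small Gaussian jitter on each step
        x = x0 + (tx - x0) * t + random.gauss(0, 1.5)
        y = y0 + (ty - y0) * t + random.gauss(0, 1.5)
        path.append((x, y))
    path.append((tx, ty))  # finish exactly on the chosen landing point
    return path
```

Varying the speed would just mean varying the delay between replaying successive points; a real bot would also occasionally overshoot and correct, which this sketch omits.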
A trained human is likely capable of doing it exactly like Puppeteer does, except that when a machine is doing it, the pointer may move instantly. One wrong assumption covered, two more to go.
The second assumption made here is that bots won't fuzz their movements, when, as pointed out above, fuzzing is very much a thing.
The third problem is that you can't be sure of the input device. A pointing stick like the one on a ThinkPad is uncommon but still used as an input device. Touchscreens are common and produce human-ish input, but you'd have to cover them too. In the extreme case, some X-savvy person may move the pointer using X's built-in capabilities, via dmenu and the like; even if that's highly unlikely, the check will still fail for them.
There's no point in doing all the possible checks you may come up with: edge cases, no matter how impossible they look, still exist and easily break the consistency of the slider solution. Google captcha and FunCaptcha are nearly the best options you can readily get, freeing your hands and head of a huge deal of the hassle you'll face when dealing with this task.
And/or sell it as a service where you BYO keyword list and this ranks them. Keyword lists can themselves be scraped from forums and websites in the markets the client is interested in. The scraping could also be an add-on service.
This would tie in nicely with the keyword research that Ahrefs, Moz, etc. offer, and with other various-colour-hat SEO software.
Selling the data as an API definitely has potential. I was originally thinking of allowing CSV downloads of the data. Would an API be more useful for your client's use case?
You seem to have done most of the hard work: collecting the data. I'm not sure they'd care what format the data was in, provided it could be expanded to cover the topics their clients need.
import pymemcache.client
import requests
import ring

client = pymemcache.client.Client(('127.0.0.1', 11211))  # create a memcache client

@ring.memcache(client, expire=60)  # lru -> memcache: same decorator pattern, entries expire in 60 seconds
def get_url(url):
    return requests.get(url).content
How are you supposed to configure the client at 'runtime' instead of 'compile time' (when the code is executed and not when it's imported)?
Careful placement of imports in order to correctly configure something just introduces delicate pain points. It'll work now, but an absent-minded import somewhere else later can easily lead to hours of debugging.
It's better not to expect everyone to know the details of decorators just to use your API, and a factory is a nice hook to have anyway: it says where the code for that dynamic configuration should live, and code as documentation is the best documentation.
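The factory approach can be sketched in a few lines. Everything here is hypothetical (`make_cached`, and a `FakeClient` standing in for a real pymemcache client so the sketch runs without a server); the point is only that the client is passed in at call time, not baked in at import time.

```python
import functools

class FakeClient:
    """Stand-in for pymemcache's Client so the sketch runs without a server."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def set(self, key, value, expire):
        self.store[key] = value  # expiry is ignored in this fake

def make_cached(client, expire=60):
    """Hypothetical factory: builds a caching decorator around a client
    that is configured at runtime, not when the module is imported."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args):
            key = '{}:{!r}'.format(func.__name__, args)
            value = client.get(key)
            if value is None:  # note: a genuinely-None result is never cached here
                value = func(*args)
                client.set(key, value, expire)
            return value
        return wrapper
    return decorator
```

With this shape, `main()` can build the real client from argv/config and then apply `make_cached(client)` to whatever functions need caching, instead of the decorator capturing a module-level client at import.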
Also a patch() context manager would be nice for temporary caching:
with cache.patch('module.lru', expire=60):
    get_url()

But it's hard to do in a thread-safe way, so compromises would have to be made.
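Such a patch() context manager could look roughly like this. The layout is hypothetical (a module-level settings object named by a `'module.attr'` string), and it deliberately ignores the thread-safety caveat above: it mutates shared state, so concurrent callers would see each other's overrides.

```python
import contextlib
import importlib

@contextlib.contextmanager
def patch(target, **overrides):
    """Temporarily override attributes of a cache-settings object named
    by a 'module.attr' path. Hypothetical layout; NOT thread-safe."""
    module_name, attr = target.rsplit('.', 1)
    settings = getattr(importlib.import_module(module_name), attr)
    saved = {k: getattr(settings, k) for k in overrides}
    for k, v in overrides.items():
        setattr(settings, k, v)
    try:
        yield settings
    finally:
        # restore the previous values even if the body raised
        for k, v in saved.items():
            setattr(settings, k, v)
```

A thread-safe version would need either thread-local settings or a lock held for the whole `with` block, which is exactly the compromise mentioned above.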
I have been thinking about setup and teardown for asyncio apps in Python lately.
The async with block is a nice idea but doesn't deal with the reality that often a resource has multiple consumers. For instance, there might be several components of an application that use a database connection -- I really want to make the connection once and tear it down only after all of the clients of that connection have themselves been torn down.
What I'm imagining the answer to be is something a little bit like the Spring Framework but fundamentally centered around asyncio.
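One small building block for the multiple-consumers problem is a refcounted async context manager: the resource is opened on the first acquire and closed only when the last consumer releases it. This is just a sketch (the `SharedResource` name and open/close-coroutine interface are made up, not from any framework):

```python
import asyncio
import contextlib

class SharedResource:
    """Hypothetical refcounted wrapper: opens the underlying resource on
    the first acquire, closes it after the last release."""
    def __init__(self, open_coro, close_coro):
        self._open = open_coro
        self._close = close_coro
        self._count = 0
        self._resource = None
        self._lock = asyncio.Lock()

    @contextlib.asynccontextmanager
    async def acquire(self):
        async with self._lock:
            if self._count == 0:
                self._resource = await self._open()
            self._count += 1
        try:
            yield self._resource
        finally:
            async with self._lock:
                self._count -= 1
                if self._count == 0:
                    await self._close(self._resource)
                    self._resource = None
```

A full Spring-like answer would layer dependency resolution on top of this, so that teardown order follows the dependency graph rather than a single refcount.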
I think there are two common situations that a 'compile time' configuration would not support.
- Loading configuration from `main()`, e.g. configuration passed in via sys.argv and processed by argparse.
- Setting configuration within tests. Unless explicitly told otherwise, I'd expect all tests to be performed against an empty cache. Not to mention, there's no guarantee that I'll have access to a server to use during tests.
The way dogpile does this is that your decorator is configured in terms of a cache region object, which you configure with backend and options at runtime.
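That region pattern can be sketched in plain Python. This is a hypothetical minimal `Region`, not dogpile's actual implementation; it just shows how the decorator binds at import time while the backend arrives at runtime.

```python
import functools

class Region:
    """Minimal sketch of a dogpile-style cache region: decorators are
    applied at import time, the backend is configured later at runtime."""
    def __init__(self):
        self._backend = None

    def configure(self, backend):
        self._backend = backend  # e.g. a dict as an in-memory backend

    def cache_on_arguments(self):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args):
                if self._backend is None:
                    return func(*args)  # unconfigured: call straight through
                key = (func.__name__, args)
                if key not in self._backend:
                    self._backend[key] = func(*args)
                return self._backend[key]
            return wrapper
        return decorator

region = Region()

@region.cache_on_arguments()  # bound at import time...
def double(x):
    return x * 2

region.configure({})  # ...backend chosen at runtime, e.g. from main()
```

This also answers the testing point above: a test can call `region.configure({})` to get a fresh, empty, in-process cache with no server involved.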
But if I can enter stuff into the blockchain (and presumably I can; you can't breed pigs by consensus...), why would I care about double-spend?
If I acquire some stolen pigs, or some pigs from a less than ideal lineage, I'll just enter them into the blockchain and say they're the result of breeding.
Blockchain works because it's verifiable. Breeding pigs, digging carbon (in its many forms) out of the ground, and turning these things into other things aren't.
I'd presume that entering them into this blockchain would require submitting their biometrics, at which point it'd be indelibly recorded that you had 37 new pigs, 33 of which looked identical to pigs stolen last weekend. Or you could fabricate biometrics, at which point you're fine until you try to sell them and then someone notices that the pigs you have aren't the pigs you own in the system.
It also makes it harder to keep, say, 2000 untaxed pigs on top of the 1000 pigs you pay tax on, then sell each untaxed pig as one of the 1000 official ones, with each official pig being sold three times to three different slaughterhouses. If everything is publicly recorded, such as on a blockchain, you can't get away with that as easily.
- They struggle with fragmentation.
- Sometimes the UEFI/BIOS just won't see the disk; not sure why. I'm guessing the enclosure doesn't boot fast enough?
- Sometimes the enclosure just wouldn't read the ISO list. Again, no idea why. Fragmentation, maybe?