My only gripe with the game is that healing doesn't give XP to the healing units. This means you need to place them in combat to level up instead of keeping them behind the fighters like they're intended to be, and since they initially have low health they are very squishy. I know you can kinda cheese it by reducing a monster to 1-2 HP and then getting them to attack, but it feels like going against their role.
> It is felt that allowing units to gain experience without risk would make leveling-up of such units inevitable. Further, one of the motivating examples of this is so that units such as shaman can have a hope to level up in multiplayer. It is pointed out that if the experience gains were high enough to allow shaman to level up in a single multiplayer game, then it would be trivial to gain the best type of healing unit in a campaign very quickly.
It depends. Personally I think they should make alternatives easier to access. For instance, an Options setting where people can pick other ways to level up. It doesn't have to be 1000000 ways, just, say, 3-5 ways in total, with the first one being the main default. Only the main default is kept balanced; the rest can be unbalanced, just allowed per option as-is.
I've enjoyed this, honestly. There's a whole short-term pain/long-term gain tradeoff to risking healers that adds more strategy to the campaign.
> I know you can kinda cheese it by reducing a monster to 1-2 HP
In practice, I've found it difficult to get monsters to 1-2 HP since it often means not using your most powerful attacks. On harder difficulties I usually can't afford the opportunity cost.
Yeah, I personally found this to be a big part of the tactical and strategic challenge. It reminded me a lot of Pokemon, where you have a similar challenge of slotting "exposure to fighting" into a limited action and HP budget.
Edit: Now that I think about it, most turn-based games have this mechanic. It's almost an idiomatic balance/design decision in gaming.
Compare to Dota where support heroes have acquired more and more opportunities for assist gold/XP, it does in some sense make the game "easier" for the support players, but then the game is harder in other ways because now the supports are all way more farmed and dangerous than in older versions. It's the difference between controlling an army of many units and having to manage them all, versus controlling one unit and needing to work together within a team.
gloomhaven has a reasonable solution to this - some heals give experience, others don't, and oftentimes gaining experience comes with giving up other resources
There's actually an add-on for this (and a bunch of other things too): Advance Wesnoth Wars. It adds options for XP-for-healing but also for making terrain affect damage taken rather than chance to hit (which is fun when you get tired of the randomness).
This fork[0] of a fork of the original allegedly works with 1.18. I haven't tested it, because these days I play with the built-in predictable RNG and it suffices for me.
Essentially it makes it so that if your attack is 4 swings attacking a unit on 50% defense territory, it will hit twice and miss twice every time. Any remainders are dealt with randomly, but the seed is based on the save, so it removes save-scumming (which has always tempted me, and I don't like).
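For the curious, the mechanic described above can be sketched roughly like this. This is a hypothetical illustration of a "predictable RNG" (the function name, seed handling, and floor-then-random-remainder split are my own, not Wesnoth's actual code):

```python
import random

def resolve_swings(n_swings, hit_chance, seed):
    """Deterministically resolve attack swings: guarantee floor(n * p)
    hits, then decide the fractional remainder with a seeded RNG so
    reloading the save can't change the outcome (no save-scumming)."""
    expected = n_swings * hit_chance
    guaranteed = int(expected)           # hits that always land
    remainder = expected - guaranteed    # probability of one extra hit
    rng = random.Random(seed)            # seed derived from the save file
    extra = 1 if rng.random() < remainder else 0
    return guaranteed + extra

# 4 swings vs. a unit on 50% defense terrain -> always exactly 2 hits
print(resolve_swings(4, 0.5, seed=1234))  # -> 2
```

With no fractional remainder (as in the 4-swing, 50% case) the result is fully deterministic; only odd leftovers like 3 swings at 50% involve the seeded roll.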
Isn't the strategy then to keep them behind the fighter units, wait until an enemy is 1 HP away from death, make the healer advance and make a kill, then put a fighter in front of them again?
At the more difficult levels it is really easy to get to a point where your strategy relies on a high probability event (a healer landing just one hit on a low HP unit) and then losing because it went against you. Which is frustrating, even if completely expected.
An indirect compensation is that these units require less XP to advance, but I understand your concern too.
> I know you can kinda cheese it by reducing a monster to 1-2 HP and then getting them to attack, but it feels like going against their role.
This is a problem with the XP-per-kill system. Wesnoth could use a variant instead. I use the above strategy all the time to level the healers up.
Note that elven units have slow (their healers), which is very powerful in its own right for getting a kill on a unit. First slow, then deal damage with other units.
For llama-server (and possibly other similar applications) you can specify the number of GPU layers (e.g. `--n-gpu-layers`). By default this is set to run the entire model in VRAM, but you can set it to something like 64 or 32 to get it to use less VRAM. This trades speed, since the layers left out of VRAM run on the CPU instead, but allows you to run a larger model, a larger context, or additional models.
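As a concrete illustration, a llama-server invocation along these lines (the model path and values here are placeholders, not recommendations):

```shell
# Offload only 32 of the model's layers to the GPU, leaving the rest
# on the CPU, freeing VRAM for a larger context window.
llama-server \
  --model ./models/some-model.gguf \
  --n-gpu-layers 32 \
  --ctx-size 16384
```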
That search (e.g. `ford fiesta water pump`) is consistent for me as well, except for an "explore more" section in the middle of the results.
So it does seem to be specific searches where it gives up after the first 7-10 results (or decides to show you some more related results after 20-30 additional unrelated results).
I wonder if this is algorithmic. E.g. people searching for a specific "how to replace/fix ..." are not going to click on results from their recommended feed, so the algorithm could have learned to keep those results fixed. However, someone looking for a piece of entertainment (trailer, book review, etc.) may be more inclined to click on other unrelated content, so those searches are more inclined to show results from the user's recommended feed.
I just did the first one (`dune book review`) and the results were good. Like 12 relevant results, 5 shorts in one row (also relevant), followed by many more relevant results.
There are several issues with "Batteries Included" ecosystems (like Python, C#/.NET, and Java):
1. They are not going to include everything. This includes things like new file formats.
2. They are going to be out of date whenever a standard changes (HTML, etc.), application changes (e.g. SQLite/PostgreSQL/etc. for SQL/ORM bindings), or API changes (DirectX, Vulkan, etc.).
3. Things like data structures, graphics APIs, etc. will have performance characteristics that may not suit your use case.
4. They can't cover all niche use cases such as the different libraries and frameworks for creating games of different genres.
For example, Python's built-in XML support (ElementTree) only implements a subset of XPath and doesn't support parsing HTML.
The fact that Python, Java, and .NET have large library ecosystems proves that even if you have a "Batteries Included" approach there will always be other things to add.
"Batteries included" means "ossification is guaranteed", yah. "stdlib is where code goes to die" is a fairly common phrase for a reason.
There's clearly merit to both sides, but personally I think a major underlying cause is that libraries are trusted. Obviously that doesn't match reality. We desperately need a permission system for libraries, it's far harder to sneak stuff in when doing so requires an "adds dangerous permission" change approval.
> "Batteries included" means "ossification is guaranteed", yah. "stdlib is where code goes to die" is a fairly common phrase for a reason.
Except I'd rather have ossified batteries that solve my problem, even if not as convenient as more modern alternatives, than not have them at all on a given platform.
But also everyone sane avoids the built-in http client in any production setting because it has rather severe footguns and complicated (and limited) ability to control it. It can't be fixed in-place due to its API design... and there is no replacement at this point. The closest we got was adding some support for using a Context, with a rather obtuse API (which is now part of the footgunnery).
There's also a v2 of the json package because v1 is similarly full of footguns and lack of reasonable control. The list of quirks to maintain in v2's backport of v1's API in https://github.com/golang/go/issues/71497 (or a smaller overview here: https://go.dev/blog/jsonv2-exp) is quite large and generally very surprising to people. The good news here is that it actually is possible to upgrade v1 "in place" and share the code.
There's a rather large list of such things. And that's in a language that has been doing a relatively good job. In some languages you end up with Perl/Raku or Python 2/3 "it's nearly a different language and the ecosystem is split for many years" outcomes, but Go is nowhere near that.
Because this stuff is in the stdlib, it has taken several years to even discuss a concrete upgrade. For stuff that isn't, ecosystems generally shift rather quickly when a clearly-better library appears, in part because it's a (relatively) level playing field.
This looks like an ad for batteries included to me.
Libraries also don't get it right the first time so they increment minor and major versions.
Then why is it not okay for built-in standard libraries to version their functionality also? Just like Go did with JSON?
The benefits are worth it judging by how ubiquitous Go, Java and .NET are.
I'd rather leverage the billions in support paid by the likes of Google, Oracle, and Microsoft to build libraries for me than some random low-bus-factor person, prone to being hacked at any time due to bad security practices.
Setting up a large JavaScript or Rust project is like giving 300 random people on the internet permission to execute code on my machine. Unless I audit every library update (spoiler: no one does it because it's expensive).
Libraries don't get it right the first time, but there are often multiple competing libraries which allows more experimentation and finding the right abstraction faster.
Third party libraries have been avoiding those json footguns (and significantly improving performance) for well over a decade before stdlib got it. Same with logging. And it's looking like it will be over two decades for an even slightly reasonable http client.
Stuff outside stdlib can, and almost always does, improve at an incomparably faster rate.
And I think the Go people seem to do a fairly good job of picking out the best and most universal ideas from these outside efforts and folding them in.
.NET's JSON and their Kestrel HTTP server beg to differ.
Their JSON even does cross-platform SIMD and their Kestrel stack was top 10/20 on techempower benchmarks for a while without the ugly hacks other frameworks/libs use to get there.
stdlib is the science of good enough and sometimes it's far above good enough.
For me, the v2 re-writes, as well as the "x" semi-official repo are a major strength. They tell me there is a trustworthy team working on this stuff, but obviously not everything will always be as great as you might want, but the floor is rising.
yea, I like the /x/ repos a fair bit. "first-party but unstable" is an extremely useful area to have, and many languages miss it by only having "first-party stable forever" and "third party". you need an experimentation ground to get good ideas and seek feedback, and keeping it as a completely normal library allows people/the ecosystem to choose versions the same way as any other library.
Another downside of a large stdlib is that it can be very confusing. It took me a while to figure out how Unicode is supposed to work in Go, as you have to track down throughout the APIs what the right things to use are. Which is even more annoying because the support is strictly binary and buried everywhere without being super explicit or discoverable.
I'm not sure I understand. Why would a standard library, a collection of what would otherwise be a bunch of independent libraries, bundled together, be more confusing than the same (or probably more) independent libraries published on their own?
100% to libraries having permissions. If I'm using some code to say compute a hash of a byte array, it should not have access to say the filesystem nor network.
please! nobody uses XPath (coz JSON killed XML), nor RDF (the semantic web never happened, and one revision every 10 years is not fast), schema.org (again, nobody cares), PNG: no change in the last 26 years, not fast. the HTML "living standard" :D is completely optional and hence not a standard by definition.
XPath 1.0 is a pain to write queries for. XPath 2.0 adds features that make it easier to write queries. XPath 3.1 adds support for maps, arrays, and JSON.
And the default Python XPath support is severely limited, not even a full 1.0 implementation. You can't use the Python XPath support to do things like `element[contains(@attribute, 'value')]` so you need to include an external library to implement XPath.
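To make the limitation concrete, here's a quick sketch using the stdlib's ElementTree (assuming that's the "default Python XPath support" in question): simple attribute-equality predicates work, but XPath functions like `contains()` are rejected:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring('<root><item class="foo-bar"/><item class="baz"/></root>')

# Simple attribute-equality predicates are part of the stdlib subset:
print(len(doc.findall(".//item[@class='baz']")))  # -> 1

# But XPath functions like contains() are not supported by ElementTree's
# limited XPath implementation and raise a SyntaxError:
try:
    doc.findall(".//item[contains(@class, 'foo')]")
except SyntaxError as e:
    print("unsupported:", e)
```

An external library (lxml, for instance) is needed for the full XPath 1.0 feature set.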
The problem of XPath 3.1 is that it is very complex [1] - this is a long page. Compared to the 1.0 spec, it is just too complex in my view. For the open source project I'm working on, I never felt that the "old" XPath version is painful.
I think simplicity is better. That's why JSON is used nowadays, and XML is not.
XPath is used in processing XML (JATS and other publishing/standards XML files) and can be used to process HTML content.
RDF and the related standards are still used in some areas. If the "Batteries Included" standard library ignores these then those standards will need an external library to support them.
Schema.org is used by Google and other search engines to describe content on the page such as breadcrumbs, publications, paywalled content, cinema screenings, etc. If you are generating websites then you need to produce schema.org metadata to improve the SEO.
Did you notice that a new PNG standard was released in 2025 (last year, with a working draft in 2022) adding support for APNG, HDR, and Exif metadata? Yes, it hasn't changed frequently, but it does change. So if you have PNG support in the standard library you need to update it to support those changes.
And if HTML support is optional then you will need an external library to support it. Hence a "Batteries Included" standard library being incomplete.
comparing to Node, .NET is batteries included: built-in Linq vs needing lodash external package, built-in Decimal vs decimal.js package, built-in model validation vs class-validator & class-transformer packages, built-in CSRF/XSRF protection vs csrf-csrf package, I can go on for a while...
That's my point. You can have a large standard library like those languages I mentioned, but that isn't going to include everything nor cover every use case, so you'll have external libraries (via PyPi for Python, NuGet for .NET, and Maven for Java/JVM).
depends, JavaScript in the browser has many useful things available which I miss with Python, e.g., fetch, which in Python needs a separate package like requests to avoid a clunky API. Java had this issue for a long time as well; since Java 11 there is the HttpClient with a convenient API.
The question is really about where the boundary between presentation (CSS) and interactivity (JavaScript) lies.
For static content like documents the distinction is easy to determine. When you think about applications, widgets, and other interactive elements the line starts to blur.
Before things like flex layout, positioning content with a 100% height was hard, resulting in JavaScript being used for layout and positioning.
Positioning a dropdown menu, tooltip, or other content required JavaScript. Now you can specify the anchor position of the element via CSS properties. Determining which anchor position to use also required JavaScript, but with things like if() can now be done directly in CSS.
Implementing disclosure elements had to be done with a mix of JavaScript and CSS. Now you can use the details/summary elements and CSS to style the open/close states.
Animation effects when opening an element, on hover, etc. such as easing in colour transitions can easily be done in CSS now. Plus, with the reduced motion media query you can gate those effects to that user preference in CSS.
So people who play games for a living are not adults? There are many people who create videos in Minecraft with complex builds, drawing inspiration from things like architecture, nature, etc.
And there are many adults who play video games to unwind after work.
And it's not just men who play video games. There are a lot of women who play video games including Minecraft and other games, including a huge range of more casual games.
Wine has a lot of tests that are run across platforms to check conformance -- https://test.winehq.org/data/. These are a large part of why it has good compatibility.
With this exact point in mind: I've recently written a pretty straightforward win32 C implementation of a utility with some context-dependent window interactions and a tray icon to help monitor and facilitate reloading of a config file.
Is there any way I can use the Wine project to facilitate compiling and running this straight under an x11/linux environment as an integrated project that doesn't require the end user to fiddle with Wine? I don't mind bundling shared code as needed. Help appreciated; I tried hard and failed at this endeavour previously.
> Is there any way I can use the Wine project to facilitate compiling and running this straight under an x11/linux environment as an integrated project that doesn't require the end user to fiddle with Wine? I don't mind bundling shared code as needed. Help appreciated; I tried hard and failed at this endeavour previously.
Yep, that's the route I tried before; no good. Maybe it's just that the documentation is past its sell-by date, maybe it's lack of community use.. I'm just not seeing it. Even the article itself describes how to make an exe file... that will then work in Linux? Or is it simply a program that's easier to run on Wine? Loads of text with unclear details throughout it.
The idea behind a greenscreen is that you can make that green colour transparent in the frames of footage allowing you to blend that with some other background or other layered footage. This has issues like not always having a uniform colour, difficulty with things like hair, and lighting affecting some edges. These have to be manually cleaned up frame-by-frame, which takes a lot of time that is mostly busy work.
An alternative approach (such as that used by the sodium lighting on Mary Poppins) is that you create two images per frame -- the core image and a mask. The mask is a black and white image where the white pixels are the pixels to keep and the black pixels the ones to discard. Shades of gray indicate blended pixels.
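The blend itself is simple arithmetic. A toy sketch of compositing one row of pixels with such a mask (values invented for illustration; `alpha` plays the role of the grayscale mask, normalized so 1.0 = white/keep and 0.0 = black/discard):

```python
# One row of grayscale pixel values for the foreground and background,
# plus the mask: 1.0 keeps the foreground, 0.0 keeps the background,
# and in-between values blend the two.
fg    = [10.0, 200.0, 100.0, 50.0]
bg    = [ 0.0,   0.0, 255.0, 255.0]
alpha = [ 1.0,   1.0,   0.5,  0.0]

# Standard alpha compositing: out = fg*alpha + bg*(1 - alpha)
out = [f * a + b * (1.0 - a) for f, a, b in zip(fg, alpha, bg)]
print(out)  # -> [10.0, 200.0, 177.5, 255.0]
```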
For the mask approach you are filming a perfect alpha channel to apply to the footage that doesn't have the issues of greenscreen. The problem is that this requires specialist, licensed equipment and perfect filming conditions.
The new approach is to take advantage of image/video models to train a model that can produce the alpha channel mask for a given frame (and thus an entire recording) when just given greenscreen footage.
The use of CGI in the training data allows the input image and mask to be perfect without having to spend hundreds of hours creating that data. It's also easier to modify and create variations to test different cases such as reflective or soft edges.
Thus, you have the greenscreen input footage, the expected processed output and alpha channel mask. You can then apply traditional neural net training techniques on the data using the expected image/alpha channel as the target. For example, you can compute the difference on each of the alpha channel output neurons from the expected result, then apply backpropagation to compute the differences through the neural network, and then nudge the neuron weights in the computed gradient direction. Repeat that process across a distribution of the test images over multiple passes until the network no longer changes significantly between passes.
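As a toy illustration of that training loop, here's a pure-Python sketch with invented data standing in for the CGI-rendered frames and masks, and a single linear "neuron" in place of a full network (so the "backpropagation" is one layer deep, but the loop structure — predict, compute the difference from the expected alpha, step down the gradient, repeat — is the same):

```python
import random

random.seed(0)
# Invented stand-in for (pixel feature, ground-truth alpha) pairs that
# the CGI renders would provide: x is a per-pixel 'greenness' feature,
# and the true mask here is simply alpha = 1 - greenness.
xs = [random.random() for _ in range(256)]
ys = [1.0 - x for x in xs]

w, b = 0.0, 0.0   # one linear 'neuron' predicting alpha from greenness
lr = 0.5
n = len(xs)
for _ in range(500):
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        err = (w * x + b) - y          # difference from expected alpha
        grad_w += 2.0 * err * x / n    # gradient of mean squared error
        grad_b += 2.0 * err / n
    w -= lr * grad_w                   # nudge weights down the gradient
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # -> approximately -1.0 1.0
```

After enough passes the learned weights recover the underlying mapping, which is the miniature analogue of the network's output no longer changing significantly between passes.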