Welcome to ned Productions


Welcome to ned Productions (non-commercial personal website, for commercial company see ned Productions Limited). Please choose an item you are interested in on the left hand side, or continue down for Niall’s virtual diary.

Niall’s virtual diary:

Started all the way back in 1998 when there was no word “blog” yet, hence “virtual diary”.

Original content has undergone multiple conversions: Microsoft FrontPage => Microsoft Expression Web, legacy HTML tag soup => XHTML, XHTML => Markdown, and a ‘various codepages’ => UTF-8 conversion for good measure. Some content, especially the older stuff, may not have entirely survived intact, especially in terms of broken links or images.

Latest entries:

Tuesday 7 October 2025: 18:15. It’s been a while since there was a 100% pure post on my house build. No, this isn’t the post about the insulated foundations design, which may land before the end of this month – rather, this is about the outhouse, for which you may remember I have taken on 100% of the engineering and construction detail. I recently had to do more work on that design because we were thinking of ordering the insulation for the outhouse at the same time as for the insulated foundation. However, my engineer then objected to my design not meeting the KORE agrément (which designs are supposed to meet if you buy directly from the factory), so I’ll instead source raw sheets of EPS from a building provider and do things my way.

As I now have a nearly complete set of construction detail for the outhouse, this post will be necessarily quite long. My apologies in advance, however never let it be said that you won’t be getting the full plate on my temporary foray into architect-engineer-builder engineering. As this post is so long, I’ll be making my first ever use of Hugo’s Table of Contents feature:

The design goals for the Outhouse

As described in further detail back eighteen months ago, my architect had done up a basic design for the outhouse for planning permission purposes. He had it 5.1 metres wide (4.0 metres internal) and 10.36 metres long (8.71 metres internal), with a flat roof. Those 550 mm thick walls look like passive house standard thickness, and in thinking that you’d be correct. However, I actually only wanted NZEB build standard i.e. that this outhouse would meet minimum legal habitable standards in Ireland, but for it to cost the absolute minimum possible per sqm. The reason for the very thick walls is actually so I can use the cheapest possible insulation, which is bulkier than the expensive stuff. And because it’s better to submit thicker and bigger for planning permission, as you’re allowed to build smaller but not larger.

To remind everybody of the architect’s design:

And to further remind everybody of the minimum legal build standard requirements in Ireland between 2019 and 2029:

  • Floor: <= 0.18 W/m2K
  • Walls: <= 0.18 W/m2K
  • Flat roof: <= 0.20 W/m2K (but any other kind of roof is <= 0.16 W/m2K)
  • Glazing: <= 1.4 W/m2K
  • Primary energy: <= 43 kWh/m2/yr
    • Of which at least 24% must be ‘renewable’
  • CO2 emission: <= 8 kg/m2/yr
  • Air tightness: <= 5 m3/hr/m2

These aren’t that much laxer than Passive House – apart from the air tightness – so as you will see, a fair thickness of insulation will be needed.

Some more reminding: here are approx costs at the time of writing (Oct 2025) for various insulation types in Ireland per 100 mm thickness per m2:

  • €10.07 inc VAT white EPS70 board, 0.037 W/mK thermal conductivity, score is 0.373.
  • €12.80 inc VAT graphite enhanced EPS70 board, 0.031 W/mK thermal conductivity, score is 0.397.
  • €18.60 inc VAT PIR board, 0.022 W/mK thermal conductivity, score is 0.409.
  • €49.18 inc VAT phenolic board, 0.019 W/mK thermal conductivity, score is 0.934.

The score is simply the price multiplied by the thermal conductivity with the lowest being best (i.e. lowest thermal conductivity for the least money). The white EPS is approx 19.4% worse an insulator than the graphite enhanced EPS, however it is 21.3% cheaper so it is better bang for the buck. Therefore, using more thickness of white EPS is cheaper than using better quality insulation which is exactly why I instructed my architect to use 550 mm thick walls for the outhouse in the planning permission.
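If you want to reproduce those ‘score’ figures, here is a minimal sketch using the prices and conductivities from the list above (my own quick arithmetic, nothing more):

```python
# Score = price per m2 (at 100 mm thickness) x thermal conductivity; lower is better.
insulation = {
    "white EPS70":             (10.07, 0.037),
    "graphite enhanced EPS70": (12.80, 0.031),
    "PIR":                     (18.60, 0.022),
    "phenolic":                (49.18, 0.019),
}

for name, (price, conductivity) in insulation.items():
    print(f"{name}: score = {price * conductivity:.3f}")

# White EPS70 vs graphite EPS70: how much worse an insulator, and how much cheaper?
print(f"worse insulator: {(0.037 / 0.031 - 1) * 100:.1f}%")   # ~19.4%
print(f"cheaper:         {(1 - 10.07 / 12.80) * 100:.1f}%")   # ~21.3%
```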

The latest design for the Outhouse

This has changed a bit since my last post on the outhouse, but is essentially the same idea: as simple and as cheap as possible:

As you can see, the u-values are just below the Irish legal maximums, except for the floor. You’ll also see the more expensive graphite enhanced EPS100 in the floor. This is to match thermal conductivity with the EPS300, which, while a bit more expensive, does make things easier as you don’t need to care about potential interstitial condensation differentials etc. There is another motivation: the walls and roof can be easily upgraded later if needed, whereas the floor is likely there forever. In fact, that’s the motivation behind the perhaps excessive 100 mm ventilated cavity: if down the line we want to add +50 mm of EPS to the walls without changing the outside, it should be very easy to do so.

This isn’t the only place where I’ve spent more than absolutely necessary out of a desire to make calculating and building the thing easier – the foundations are fully wrapped with insulation instead of being traditional strip foundations, which would be cheaper. This is the difference, picture courtesy of KORE:

Strip foundations require trenches to be dug under all walls, the bottoms filled with liquid concrete, then underground walls of blockwork built (called ‘deadwork’), with the area underneath the floors filled with rubble, then a layer of EPS or PIR, then the concrete floor. Whilst cheaper and far and away the most commonly employed in Ireland, I decided to go for a simplified edition of the KORE insulated foundation instead, despite it costing a bit more. The reasons are similar to putting better than necessary insulation into the floor – once it’s done, it can’t be amended later – but also because a fully EPS-wrapped foundation is far simpler to calculate structural loadings for, and to construct it’s just levelling gravel and running a whacker over it, something I could do myself if I needed to (whereas strip foundations are a two-man job). I therefore reckoned, on balance, it was worth spending a little more money for ease of everything else, plus the guaranteed lack of thermal bridging simply makes this type of foundation superior by definition.

The roof and walls are as cheap as I could make them. They are also easy to construct, and again 100% doable on my own if necessary (though an extra pair of hands would make some parts much quicker). The roof, being just timber and polystyrene, is nearly light enough that I could lift one end of it. So by far the main loading on the foundations is the single-leaf solid concrete block walls, chosen solely because they’re cheap and easier than me having to manually construct timber frames. Twenty four courses of solid concrete blocks laid on flat at 20 kg each, bearing on a 440 x 215 mm footprint, works out at 5.1 metric tonnes per m2, which is almost exactly 50 kPa of pressure on the concrete slab at the base. EPS300 is called that because it will compress by 10% at 300 kPa loading – it will compress by 2% at 90 kPa. So even if the blocks were directly upon the EPS300, they would be absolutely fine as this is such a light structure.
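For those who like to see the arithmetic, that 50 kPa works out like this (my own back-of-envelope numbers):

```python
# Pressure under a stack of 24 solid concrete blocks laid on flat.
block_mass_kg = 20.0
courses = 24                       # 24 courses x 100 mm = 2.4 m of wall
footprint_m2 = 0.440 * 0.215       # bearing area of a block laid on flat

mass_per_m2 = courses * block_mass_kg / footprint_m2
pressure_kpa = mass_per_m2 * 9.81 / 1000

print(f"{mass_per_m2 / 1000:.1f} t/m2")   # ~5.1 t/m2
print(f"{pressure_kpa:.0f} kPa")          # ~50 kPa, vs 300 kPa for 10% EPS300 compression
```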

I have them on a 150 mm thick concrete slab however, and this is the main deviation from the KORE agrément requirements. KORE require this:

… which has the block leaf wall bearing down on 250 mm of concrete reinforced with two layers of A393 mesh, which is 10 mm diameter steel at 200 mm centres. And if my walls were imposing as much pressure as a two storey house with a slate roof on top, I would absolutely agree. However mine is a single storey with a timber + EPDM flat roof on top. I think the KORE requirements are excessive for my use case, so I told my engineer not to worry about including the insulation for the outhouse in the KORE order; I’ll sort out loose sheets from a building supplies provider (more on that below).

Is it actually safe to ignore the KORE agrément requirements for this use case?

Just to make absolutely sure I’m right on this, is a 150 mm thick RC slab with A252 steel mesh sufficient? The slab will be subject to these forces:

  1. Compression, from the weight bearing down.
  2. Stretching, from the bottoms of the walls trying to splay outwards (this is called ‘tension’).
  3. Bending, from the weight bearing down in some parts but not in others (this is called ‘flex’).
  4. Shear, from the forces in one part of the slab being opposed to forces in other parts of the slab.

Concrete is great at compression on its own, but needs reinforcing to cope with bending or shear. For C25 concrete:

  • Compressive strength: 25-30 MPa.
  • Tensile strength: 2.6-3.3 MPa.
  • Flexural strength: 6.6 MPa.
  • Shear strength: 0.45 MPa (yes, this is particularly weak).

One must therefore particularly worry about shearing concrete (which I’ve personally witnessed many a time occurring, indeed if you whack any concrete with a hammer it’ll readily shear off chunks without much effort), and to a lesser extent stretching concrete. To solve those issues, one usually adds fibres or steel into the concrete mix to improve the durability of concrete under load.

A252 steel mesh, as I specified above, is 8 mm steel at 200 mm centres. The type of steel is usually B500A:

  • Tensile strength: >= 500 MPa.
  • Shear strength: >= 125 MPa.

I reckon that there is 0.00005 m2 of steel per strand, 4.5 strands per metre, so 0.000245 m2 of steel per 0.15 m2 of slab in the horizontal, or 0.163%. In the vertical, you would have twenty strands per metre, so 0.001 m2 of steel per m2 of slab in the vertical, or 0.1%.

Therefore, for A252 steel mesh alone, we would have 500 kPa of tensile strength in the vertical, and 123 kPa in the horizontal. Therefore, the mesh on its own could happily take the full load of both of the walls hanging off it horizontally, never mind vertically.
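As a cross-check on those steel fractions: the ‘A’ number in a mesh designation is simply the cross-sectional area of steel in mm² per metre width, so A252 and A393 can be recomputed from bar diameter and spacing. A quick sketch (my own arithmetic):

```python
import math

def mesh_area_mm2_per_m(bar_dia_mm: float, spacing_mm: float) -> float:
    """Cross-sectional steel area per metre width of mesh."""
    bars_per_metre = 1000 / spacing_mm
    return bars_per_metre * math.pi * (bar_dia_mm / 2) ** 2

print(mesh_area_mm2_per_m(8, 200))    # ~251 mm2/m, i.e. A252
print(mesh_area_mm2_per_m(10, 200))   # ~393 mm2/m, i.e. A393

# As a fraction of a 150 mm deep concrete section, per metre width:
print(mesh_area_mm2_per_m(8, 200) / (150 * 1000) * 100)   # ~0.17%, close to the ~0.163% reckoned above
```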

You are now about to ask what the strength of concrete with reinforcing steel combined might be. I thought that there would be a table somewhere with thickness of concrete, grade of concrete, type of mesh and location within the concrete slab. If there is such a table, I cannot find it. The best I can find are reinforced concrete beam calculators, which put the steel at the tensile side of the load and optionally another steel at the compressive side of the load. These are for beams which span a distance unsupported, not for slabs which are fully supported along their entire length (and therefore by definition cannot deform under loads). I’ll have to admit defeat on that.

The naïve calculation to combine the steel and the concrete is to just add them, though I think that too naïve. Fairly obviously, the steel will distribute point loads more evenly across a wider area of concrete, because it’s ‘stretchy’ relative to concrete. Big point loads should become lots of small point loads inside the slab. So almost certainly the naïve calculation is a lower bound. For a 150 mm RC slab along the length of the slab:

  • Compressive strength: 3750 kPa.
  • Tensile strength: 513 kPa.
  • Flexural strength: 990 kPa.
  • Shear strength: 98 kPa.

Which seems to me more than plenty for a 50 kPa load tugging on the ends, never mind bearing down onto the top of the slab:

  • Compressive strength: 25-30 MPa.
  • Tensile strength: 2.6-3.3 MPa.
  • Flexural strength: 6.6 MPa.
  • Shear strength: 192 kPa.

… which is a shear strength nearly 4x stronger than needed.
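For transparency, here is that naïve lower-bound arithmetic spelled out for the 150 mm slab: concrete stress capacity multiplied by slab thickness, with the mesh’s ~123 kPa tensile contribution from above added on the tension side. Strictly the multiplication gives a per-metre-width figure rather than a pressure, but it reproduces the numbers quoted above (and the same caveat applies: this is my own reckoning, not a structural engineer’s):

```python
# Naive 'just add them' lower bound for the 150 mm reinforced slab, per metre width.
# Concrete stress capacities (kPa) from the C25 figures above; thickness in metres.
thickness_m = 0.150
concrete = {"compressive": 25_000, "tensile": 2_600, "flexural": 6_600}
mesh_tensile_kpa = 123   # A252 contribution in the horizontal, as reckoned above

print(f"compressive: {concrete['compressive'] * thickness_m:.0f}")                      # 3750
print(f"tensile:     {concrete['tensile'] * thickness_m + mesh_tensile_kpa:.0f}")       # 513
print(f"flexural:    {concrete['flexural'] * thickness_m:.0f}")                         # 990
# Shear works the same way: concrete shear capacity x thickness plus the mesh's
# shear contribution, giving the ~98 kPa figure quoted above.
```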

In case my maths and understanding of structural engineering are just plain wrong, let’s also take a common sense approach. I note that in the KORE agrément, internal heavy load bearing walls are also on A252 steel mesh, but the concrete is deepened from 150 mm to 250 mm and a second layer of A393 steel mesh is added at the bottom to act as the tension side reinforcement. If a wall is not load bearing, they don’t use thickening, and a single 150 mm layer of concrete with A252 steel mesh is enough.

For that reason, I put the 205 mm of excess mesh off each side of the 4.8 metre wide A252 sheet under the outer walls. It is redundant I think, but as it would otherwise have to be folded under or cut off and wasted, I reckoned I might as well use it for tension reinforcement. KORE think that the A252 steel mesh ought to run with 75 mm spacers underneath. The smallest RC spacer appears to be 35 mm, so 35 - 43 mm would be the bottom mesh, and 75 - 83 mm would be the upper mesh, giving 32 mm of concrete between the meshes. That’s less gap than ideal, but it’ll have to do I think.

Just for completeness, if the building were two storey, you would have 100 kPa from the walls and maybe another 50 kPa from a slate roof, plus perhaps another 50 kPa from upstairs walls and floor. So let’s assume 200 kPa of load on the slab edges. If one has 250 mm of concrete with two layers of A393 mesh and a third layer of A252 mesh (as per the KORE agrément diagram):

  • Compressive strength: 6250 kPa.
  • Tensile strength: 1078 kPa.
  • Flexural strength: 1650 kPa.
  • Shear strength: 548 kPa (wow!).

Which has a 2.75x safety margin for a 200 kPa load, and that’s assuming all the upstairs floor bears onto the side structure and there are zero load bearing internal walls. In reality, you would have downstairs load bearing walls to offload from the sides and better spread loads across the slab evenly. So I think that my maths and how to calculate this stuff adds up.

Before moving on, I should repeat my caveat above that I am not a structural engineer, I don’t really know what I’m doing here, and all these numbers may be unsafely wrong. Please don’t trust anything I’ve done here, and instead hire a proper structural engineer!

The changes from the architect’s design

Because we now know that we are using solid concrete blocks which have dimensions 440 x 215 x 100 mm, I slightly tweaked the architect’s layout:

The changes are:

  1. The width and length of the building are slightly reduced to reflect the 535 mm thick walls instead of 550 mm thick walls.
  2. The internal walls are now all 100 mm thick as that is a single concrete block on edge. I expect to directly paint those blocks and not finish them further than that.
  3. The door into the lobby on the right has been slightly moved upwards so the wall between the toilet’s window and the door is a little over one concrete block long.
  4. The wall at the bottom of the main gym open area is moved slightly downwards to make the gym space exactly 6.2 metres long, only because I like round(er) numbers.

Total internal floor space is 36.25 m2, which is more than the entire ground floor of my current rented house if all the internal walls were removed!

The insulation under the concrete slab

KORE supplies its EPS sheets in these sizes:

  • 600 x 1200
  • 1200 x 1800
  • 1200 x 2400

As you will have noticed in the KORE agrément above, they want you to lay multiple layers of 100 mm thick EPS ensuring that the joins don’t overlap. As Irish NZEB doesn’t require you to do that, I’ll be making my life much easier and laying sheets of 200 mm thick EPS, and gluing each sheet together. This is inferior, but it’s also much quicker and easier.

There isn’t much more to say here: I explained above why EPS300 is needed for the outer walls. I suppose I should mention why EPS100 is sufficient for the internal walls: EPS100 will compress by 10% at 100 kPa loading, and by 2% at 30 kPa loading. The inner walls are on edge rather than on flat, so that is a load of 24 kPa. The concrete slab is a further 3.42 kPa, so a total load of 28 kPa on EPS100 would be fine.

In practice, the concrete slab will spread the load of the inner walls across a much wider area, well below 30 kPa. The edges of the building are different: the slab can spread load only inwards, hence the smallest possible sheet of (expensive) EPS300 only around the outside edges.

If the building height doubled, you would get a 100 kPa load on the outer edge of the slab. A 250 mm thick slab with added A393 mesh at the edges would add 6 kPa. You need to keep the distributed load on the EPS300 below 90 kPa; however, the walls bear on 215 mm whereas the EPS300 is 600 mm wide, so that’s okay so long as the load is distributed across the 600 mm wide sheet (which is the point of the added bottom A393 mesh). Internal walls of solid concrete block on edge, so long as they don’t rise more than 2.5 metres and don’t support load from any ceiling above, should be fine on EPS100 internally. If they support the floor above in any way, then they would need EPS300 underneath them too.
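Putting numbers on that (my own arithmetic, using the figures above): spread the two-storey 100 kPa wall load from the 215 mm block bearing across the 600 mm wide EPS300 strip, add the thicker slab’s self-weight, and you stay comfortably under the 90 kPa point at which EPS300 compresses 2%:

```python
# Two-storey edge load spread across the 600 mm wide EPS300 strip.
wall_load_kpa = 100          # on the 215 mm block bearing width
bearing_width_m = 0.215
eps300_width_m = 0.600
slab_self_weight_kpa = 6     # 250 mm slab, as above

spread_kpa = wall_load_kpa * bearing_width_m / eps300_width_m + slab_self_weight_kpa
print(f"{spread_kpa:.0f} kPa on the EPS300")   # ~42 kPa, vs 90 kPa for 2% compression
```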

The insulation for the walls

We make use of the big 1200 x 2400 sheets here to save on glue and effort. Above the two sheets we chop sheets into thirds to fill the gap at the top. You can see the glazing openings as red regions, again there the EPS sheets would need to be trimmed down.

As should be obvious, internally the floor to ceiling height should be 2.8 metres, consistent with the typical room height in the main house.

The insulation for the flat roof joists

As shown in the outhouse buildup above, the 4800 x 225 x 44 mm flat roof joists are spaced at 622 mm centres to avoid having to cut the 200 mm EPS sheets in between them. Yes, this is a little too wide for walking upon – there will be a fair bit of flex – but I don’t expect to walk on the outhouse roof much.

Only at the sides are there additional 25 mm EPS sheets to close the gap between the 200 mm EPS and the walls. Screwed onto the top of the joists is 18 mm of OSB, followed by a further 50 mm of EPS to thermally break the joists from the outside, which is shown on the right. The ends of the joists also get 50 mm of EPS thermally breaking them from the outside. There is, therefore, a continuous, unbroken layer of EPS around the entire building. Rough white deal timber (spruce) and OSB isn’t too bad as a thermal bridge (~0.13 W/mK), but it’s still four times worse than EPS. Also, the EPS is vapour open, so it lets any interstitial condensation which might build up under the EPDM layer transfer away.

Above the 50 mm of EPS is another 18 mm OSB board to spread the load of walking over the EPS more evenly, and then the EPDM layer, which is the standard flat roof layer nowadays. It isn’t quite as cheap as bitumen felt, but it is much easier to work with and lasts longer. I’ll simply glue the EPDM to the upper OSB board.

The vapour open insulation design is important for this specific site’s climate. I paid for a moisture buildup analysis many years ago now, and we discovered to our horror that our PIR board based external insulation, when within a double leaf concrete block wall, would be prone to runaway moisture buildup given the humidity and weather at our specific location. That led to a very expensive and very delaying refactoring of the main house to use cellulose insulation instead. EPS, unlike PIR, is much more vapour permeable and shouldn’t have the runaway moisture buildup problem. This nasty shock also played into my decision to choose a 100 mm instead of 50 mm ventilation cavity – by keeping the EPS further away from the driving rain outside, it should further reduce moisture buildup.

What does PHPP think?

There is absolutely zero chance that this building will meet Passive House. But I thought it would be useful if I fed this building into PHPP to see how it might fare in terms of energy modelling.

I gave PHPP the buildups, dimensions etc and told it to assume Munster Joinery’s cheapest triple glazing, on the basis that I believe it is now very similarly priced to their cheapest double glazing, but you get 1.2 W/m2K u-values instead of 1.4 W/m2K with the double glazing. I told it about the heat recovery ventilation, and told it that it would ventilate at 10 m3/hr (see below). There is no hot water generation, nor heating system, nor internal heat gains from occupancy, so I zeroed those, and then I reduced the winter indoor temperature until no space heating was needed, which turned out to be 11 C. It thinks 106 kWh will be used per year to run the ventilation unit, particularly in summer to prevent overheating, which it successfully does (maximum temperature is 22 C in July).

Out of curiosity, I then restored the winter indoor temperature to 20 C, and it now thinks that 977 kWh of space heating would be needed. This is 27 kWh/m2/yr which is well below the Irish NZEB maximum of 43 kWh/m2/yr.

I effectively get free electricity except for Nov-Dec-Jan-Feb, so for just those months the space heating needed would be 729 kWh. Therefore 25.4% of the primary energy requirement would be renewable, which is above the Irish NZEB minimum of 24%.
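For clarity, the arithmetic behind those two percentages (this is my reading of the PHPP outputs above):

```python
floor_area_m2 = 36.25
space_heating_kwh = 977          # at a 20 C winter indoor temperature
winter_heating_kwh = 729         # Nov-Feb, when my electricity is not free

print(f"{space_heating_kwh / floor_area_m2:.0f} kWh/m2/yr")    # ~27, vs the 43 NZEB ceiling
renewable_fraction = (space_heating_kwh - winter_heating_kwh) / space_heating_kwh
print(f"{renewable_fraction * 100:.1f}% renewable")            # ~25.4%, vs the 24% minimum
```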

Finally, PHPP calculates u-values a little differently to conventional, so I’ll list here what it thinks the assembly u-values are:

  • Floor: 0.152 W/m2K
  • Walls: 0.16 W/m2K
  • Roof: 0.149 W/m2K

The reason these are better is that PHPP doesn’t include an adjustment factor for thermal bridges: you tell PHPP about each one individually. Because the building is wrapped with EPS, my main thermal bridges will be around the glazing, specifically where the frame meets the concrete blocks.

I may ‘solve’ this cheaply by wrapping every window opening with 25 mm of EPS, though to be honest PIR board would be better here as it’s much better performing at this thickness, and has a compressive strength of 150 kPa or so. You then fasten the windows through the board into the concrete. Normally you can’t use PIR board for this because it can’t stay damp and it doesn’t like the alkalinity of the cement in render, but because I’m timber clad I can get away with it here. The main house uses the very expensive Bosig Phonotherm board to thermally break the timber studs around the glazing reveals precisely because it is compatible with being rendered, but I think I can avoid using such expensive material here.

Bill of materials, and estimated cost

Totalling up all of the above:

Foundations

  • EPS100 silver 200 mm:
    • 11x 1200x1800
    • 4x 600x1200
  • EPS300 200 mm:
    • 31x 600x1200
  • A252 mesh 2400x4800
    • 5x

Walls

  • East:
    • EPS70 white 200 mm:
      • 6x 1200x2400
      • 2x 1200x1800
      • 2x 1200x1800 in thirds = 6x 400x1800
    • Pressure treated battens 50 x 35 x 4800:
      • 4x vertical
      • 4x horizontal
    • Glazing reveals 25mm PIR board:
      • 3x 300 x 2400
  • North:
    • EPS70 white 200 mm:
      • 4x 1200x2400
      • 1x 1200x1800 in thirds = 3x 400x1800
    • Pressure treated battens 50 x 35 x 4800:
      • 2x vertical
      • 2x horizontal
  • West:
    • EPS70 white 200 mm:
      • 6x 1200x2400
      • 2x 1200x1800
      • 2x 1200x1800 in thirds = 6x 400x1800
    • Pressure treated battens 50 x 35 x 4800:
      • 4x vertical
      • 4x horizontal
    • Glazing reveals 25mm PIR board:
      • 3x 300 x 2400
      • 2x 300 x 2400
  • South:
    • EPS70 white 200 mm:
      • 4x 1200x1800
      • 1x 1200x1800 in thirds = 3x 400x1800
    • Pressure treated battens 50 x 35 x 4800:
      • 2x vertical
      • 1x horizontal
    • Glazing reveals 25mm PIR board:
      • 4x 300 x 2400

Joists

  • EPS70 white 200 mm:
    • 64x 600x1200
  • EPS70 white 50 mm:
    • 2x 1200x2400 in quarters = 8x 300x2400
  • Rough white deal 225 x 44 x 4800:
    • 15x
  • EPS70 white 25 mm:
    • 36x 600x1200

Roof

  • EPS70 white 50 mm:
    • 16x 1200x2400
    • 4x 600x1200
  • OSB 18 mm:
    • 36x 1200x2400

I get:

  • EPS300 200 mm @ €45 inc VAT per sqm:
    • 31x 600x1200
  • EPS100 silver 200 mm @ €36 inc VAT per sqm:
    • 15x 1200x1800
  • EPS70 white 200 mm @ €20 inc VAT per sqm:
    • 16x 1200x2400 = 6x + 4x + 6x
    • 14x 1200x1800 = 4x + 1x + 4x + 5x
    • 64x 600x1200
  • EPS70 white 50 mm @ €5 inc VAT per sqm:
    • 18x 1200x2400 = 2x + 16x
    • 4x 600x1200
  • EPS70 white 25 mm @ €2.50 inc VAT per sqm:
    • 36x 600x1200
  • PIR 25 mm @ €10.86 inc VAT per sqm:
    • 3x 1200x2400 = (3x + 3x + 2x + 4x) / 4
  • 15x rough white deal 225 x 44 x 4800 @ €27.45 inc VAT each
  • 16x EPS glue @ €17 inc VAT each
  • 5x A252 mesh @ €55 inc VAT each
  • 36x OSB 18mm board @ €26 inc VAT each
  • 23x Pressure treated battens 50 x 35 x 4800 @ €5.38 inc VAT each
  • 40 bales of solid concrete blocks @ €58 inc VAT each
  • 40x bags of cement @ €8.75 inc VAT each
  • 3.5 tonnes of sand @ €65 inc VAT each
  • 12 m3 of T2 stone @ €46 inc VAT per m3
  • 15x white paint 10 litres @ €24.95 inc VAT each
  • 18x plasterboard 12.5 mm @ €16 inc VAT each

Which comes to €11,181 inc VAT. Add PC sums for these:

  • Approx €6k inc VAT for the charred larch outer cladding
  • Approx €8k inc VAT for the glazing
  • Approx €1k inc VAT for wiring
  • Approx €500 inc VAT for toilet + sink + mirror
  • Approx €500 inc VAT for internal doors

I reckon total materials cost is approx €27k inc VAT. I left off a few things like damp proof course, radon barrier, air tightness tape and fixings, never mind machine rental, so let’s call it €29k inc VAT. Which is 1k more than the last time I estimated this back in April 2024 using much less accurate calculations – well done me!

At 36.25 m2 of internal floor space, I make that €800 inc VAT per sqm fully finished excluding labour.
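For anyone who wants to check my totals, here is the tally behind the €11,181, the ~€27k and the €800 per sqm figures, using the quantities and rates from the lists above:

```python
# Quantities and rates from the bill of materials above; sheet dimensions in mm.
def sheet(w_mm, h_mm):
    return (w_mm / 1000) * (h_mm / 1000)   # area in m2

by_area = [   # (area per sheet, count, EUR per m2)
    (sheet(600, 1200),  31, 45.00),   # EPS300 200 mm
    (sheet(1200, 1800), 15, 36.00),   # EPS100 silver 200 mm
    (sheet(1200, 2400), 16, 20.00),   # EPS70 white 200 mm
    (sheet(1200, 1800), 14, 20.00),
    (sheet(600, 1200),  64, 20.00),
    (sheet(1200, 2400), 18,  5.00),   # EPS70 white 50 mm
    (sheet(600, 1200),   4,  5.00),
    (sheet(600, 1200),  36,  2.50),   # EPS70 white 25 mm
    (sheet(1200, 2400),  3, 10.86),   # PIR 25 mm
]
by_item = [   # (count, EUR each)
    (15, 27.45),   # rough white deal joists
    (16, 17.00),   # EPS glue
    (5,  55.00),   # A252 mesh
    (36, 26.00),   # OSB 18 mm
    (23, 5.38),    # pressure treated battens
    (40, 58.00),   # bales of solid concrete blocks
    (40, 8.75),    # bags of cement
    (3.5, 65.00),  # tonnes of sand
    (12, 46.00),   # m3 of T2 stone
    (15, 24.95),   # white paint
    (18, 16.00),   # plasterboard
]

materials = sum(a * n * rate for a, n, rate in by_area) + sum(n * p for n, p in by_item)
pc_sums = 6000 + 8000 + 1000 + 500 + 500   # cladding, glazing, wiring, sanitaryware, doors

print(f"materials: €{materials:,.0f}")                       # ~€11,181
print(f"with PC sums: €{materials + pc_sums:,.0f}")          # ~€27,181, i.e. ~€27k
print(f"per sqm at €29k all-in: €{29_000 / 36.25:,.0f}")     # ~€800 per sqm
```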

Obviously this isn’t a habitable building, you would need to add at least a shower and a cooking area. But even if that took the price to €31k, you’re still looking at €855 inc VAT per sqm. That is way, way, way cheaper than a typical Irish new build right now which is coming in north of €2,500 inc VAT per sqm. The reasons why are:

  1. To grant a mortgage, the banks insist on a non-flammable outer leaf, so you end up installing a completely unnecessary outer block leaf like I had to for the main house. That adds considerable complexity that this ‘non-standard’ buildup avoids, plus you have to add render and usually paint to that outer block leaf.
  2. A flat roof is very considerably cheaper than a tiled roof, especially as it can be made so lightweight that it reduces the cost of everywhere else in the house.
  3. By using passive house thick walls, I could use the cheapest possible insulation even though I’m only targeting NZEB levels of insulation. Thicker is cheaper, in other words.
  4. In most places land space is constrained by zoning, so two storey houses make more sense. You could extend this buildup to two storeys very easily; you would need a 250 mm base slab or to use strip foundations instead. That would increase the foundation costs, but as you would get nearly twice the internal floor space, it would likely be even lower cost per sqm again.
  5. Finally, the chances of getting planning permission for an entirely flat roofed building are going to be low in most parts of Ireland. Your very expensive Irish new build is in part that way due to planning permission constraints and requirements.

I suppose I have left off one big thing: this building on its own wouldn’t meet the renewable energy requirement, so you’d need to fill the roof with solar panels, so that’s another few thousand of cost. There isn’t a heating system, though with this level of insulation electric heating is probably acceptable at around €200 of cost per year. I am actually going to fit an MVHR unit for ventilation which I have already purchased (so I didn’t include it above); it’s a small Mitsubishi VL-100EU5-E unit which can move either 60 or 105 m3/hr, which should be plenty even during a gym workout. It doesn’t have the best heat recovery, only 80%, but it is ESP32 controlled and so will only turn on for short periods during the day if nobody is there. You might only need 0.33 m3/hr/m2 if a building is unoccupied, therefore 12 m3/hr should be plenty to prevent staleness. One might therefore run the unit for ten minutes each hour.
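Spelled out, the ventilation arithmetic is (the 0.33 m3/hr/m2 unoccupied rate and the ten-minutes-per-hour duty cycle are my own rough figures, as above):

```python
floor_area_m2 = 36.25
unoccupied_rate = 0.33                       # m3/hr per m2, rule-of-thumb minimum
needed = unoccupied_rate * floor_area_m2
print(f"{needed:.0f} m3/hr needed")          # ~12 m3/hr

low_speed = 60                               # Mitsubishi VL-100EU5-E low setting, m3/hr
minutes_per_hour = 10
print(f"{low_speed * minutes_per_hour / 60:.0f} m3/hr average")   # ~10 m3/hr
```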

In a proper habitable building, due to the airtightness you would need a much better MVHR system, so that plus its associated ducting would be another few thousand of cost. Still though, around €1000 inc VAT per sqm fully finished but excluding labour is probably doable.

How much might labour cost? Thanks to its extreme simplicity, two people should be able to complete this building in four weeks I reckon. At €300 per day each, that is €14,400. That would take the cost up to €50k, which is pretty much spot on what the Quantity Surveyor estimated that this outhouse would cost. That is €1,400 inc VAT per sqm incidentally, which still looks great compared to a current Irish new build.

What’s next?

Next weekend myself and Megan will be going to London for a single night for a birthday party. After that I expect no more travel until Thanksgiving, where we shall be visiting Megan’s brother in England for the annual turkey dinner.

I’ve spent almost all of three days writing up the above, so I’m pretty sick of writing virtual diary entries. I think the entry about GrapheneOS will therefore almost certainly occur after I get back from London. The remainder of this week will go on open source project work, and trying to get out to get some exercise – the weather has been very unhelpful on that recently.

After the virtual diary entry on GrapheneOS, I don’t expect further entries until the insulated foundation design for the main house is complete. I have plenty to be getting on with after this recent blast of writing on this website: I need to circle back onto my WG14 standards papers first, then force myself to complete the 3D house services layout. If I can get both done before new employment begins I would be very pleased, but if unemployment continues I have many more months of items on my todo list to iterate through. I would be surprised if I could complete that todo list before Spring 2026.

#house




Thursday 2 October 2025: 08:35. I am returned from Spain! And so begins the next year of grind, as Megan resumes her last and final year of Chartered Accountancy studies which will involve another year of keeping the children outside the house so she can study. This is okay during the warmer months, but it absolutely sucks for all in the cold, dark and wet months – on some days in previous years we literally walked around Mallow river park in the driving rain as the least worst option available to us. Joyousness!

As anticipated, I have not noticed any improvement in software role hiring, which would normally be the case when the summer ends and people come back from holidays. My current bet is that there may be a slight pickup for the new financial year starting from January, so there is no point starting to look for work until November, when next year’s headcount budget might start coming into shape for employers. Even then, I expect the bulk of any new openings to require onsite, and specifically to not permit fully remote. So my unemployment may hence continue into 2026, which is unfortunate as without employment I cannot get a mortgage, and without a mortgage I am about €100k short of what is needed to bring the building to exterior completion.

My ideal would be a twelve month fully remote contract doing unstressful work such as a maternity leave cover or similar. My last two contracts were for fast paced startups, and if I’m honest, I’m feeling a bit tapped out by fast paced startups right now. Not that there are many of those going currently judging by HackerNews, it looks like startup VC funding has also shrivelled up, which is unsurprising given the recent rise in the cost of borrowing.

Anyway, it’s moot what I would prefer, given this recession it’ll be more about what I can get at all. Still, come November I should start actively searching for and applying for roles, which I haven’t been doing so far as I’ve been too busy and there didn’t seem to be a point in the current market. Hopefully Monad will have shipped mainnet by then, and my informal promise to them to stick around until mainnet would then have been fulfilled.

So what’s for today? As mentioned in previous posts, two months ago I finally got a new mobile phone after an unusually long time with the previous one. As I usually do on here, I like to write a comparison of the previous phone to the new one – here were the last two comparisons before this one:

Why now, and why the Google Pixel 9 Pro?

My last phone upgrade was in Summer 2020. That means I’ve been using the S10 for five, straight, years. That’s unheard of for me – I was on a predictable two yearly replacement cycle occasionally nearing into a three year cycle if a specific model lasted better than the others. I can’t remember any ever lasting more than three years for one simple reason: the battery always went on them. Until the S10.

The S10’s battery life is diminished from what it was, but I’ve had zero issues with it powering off while taking long video recordings or hammering the photo taking on the camera or anything else which draws ‘too much’ current from an old battery. I have had zero issues with it getting sensitive to the cold, like that ‘fun’ time with the HTC 10 in Northern Ireland where I desperately needed to take some pictures, but the phone kept cutting out because it was absolutely baltic outside. I have no idea what Samsung did to so massively improve the battery chemistry, but whatever it was, it’s like night and day compared to previous phones. Even today, five years later, it’ll still – just about – make it through a day without being recharged even if being used to navigate London’s public transport, as I did with the kids last July. Indeed, I expect to keep using the S10 mainly as a podcast player, as it can be jammed under my head easily when I’m going to sleep, whereas the new phone is far thicker and therefore not as comfortable.

The other reason why I felt no urge to upgrade is that newer phones were inferior to the S10 for most of the past five years. To take just the Google Pixel series as a comparison:

| | Release date | Personal showstoppers |
|---|---|---|
| Galaxy S10 | 2019 | None |
| Pixel 6 | 2021 | Display is inferior; no telephoto camera |
| Pixel 6 Pro | 2021 | 76 mm wide vs 70 mm wide for the S10; 6.7 inches is too big for a phone |
| Pixel 7 | 2022 | Display is inferior; no telephoto camera |
| Pixel 7 Pro | 2022 | 77 mm wide vs 70 mm wide for the S10; 6.7 inches is too big for a phone |
| Pixel 8 | 2023 | Display is inferior; no telephoto camera |
| Pixel 8 Pro | 2023 | 77 mm wide vs 70 mm wide for the S10; 6.7 inches is too big for a phone |

So when the Google Pixel 9 Pro came out in 2024 with a 72 mm width and 6.3 inch display without any compromises in the display or cameras, I finally had a Pixel phone I could get interested in. I just needed to wait until the following year for the price to become more reasonable, as there was no way I was going to be paying €1,450 inc VAT for a phone.

Why am I limiting myself to only the Google Pixel series? This enshittification of phones after year 2020 was actually across the board. The Samsung phones after the S10 took a noticeable nosedive in specs-for-your-money. The S20 which came out immediately after the S10 was good, but only a year newer. After that, you have the same tradeoff as the Pixel phones between decent specs but too wide and too big, or markedly inferior specs for a similar width and size. Latest version LineageOS support also stops after the S20, so that pretty much eliminates Samsung from consideration. For other makes, apart from Google only the Sony Xperia, Xiaomi 13 and OnePlus 12 have latest version LineageOS support. The Xperia is a lovely phone but hideously expensive even when bought used, and the Xiaomi 13 and OnePlus 12 also both have the too big vs inferior spec problem. The latest models of the other makes have also returned to smaller phones with no compromises in spec: Megan will almost certainly be getting an Xiaomi 15 when the 17 launch last month has had some time to reduce the price of the 15, but Xiaomi look like they’ll be preventing custom ROM installation soon, which doesn’t matter for Megan, but does for me. So – to be blunt – the Google Pixel 9 Pro is the only game left in town. It cost me €950 inc VAT, whereas the S10 back in the day I acquired for around €500 inc VAT, so these newer phones are not good value for money compared to five years ago, most of which I would blame on a marked loss of competition in hardware I can easily run my own firmware upon. The only good news is the Pixel is far cheaper than a Sony Xperia, which has used car type pricing.

There is another big motivation behind Pixel phones only: GrapheneOS, which is a privacy focused fork of Android, only works on Pixel phones. There will be another, separate, post here on that, as I only want to concentrate on the hardware differences in this post. But suffice it to say for now that I felt that my historical approach of using MicroG to replace Google Play Services had run its course, and I needed something better as my degoogled daily driver going forwards.

Comparing the Samsung Galaxy S10 to the Google Pixel 9 Pro

There will be a little apples to oranges comparison problem here. The S10 had an sdcard slot, so I could happily get the smallest storage edition and fit a large, fast, sdcard. And TBH, that was amazing, and I really wish you could still get an sdcard slot on a flagship phone without paying the hideous cost of the Sony Xperia, because if the phone dies for any reason then you don’t lose most of your data. But given that that ship sailed four years ago and has not returned since, I suspect it’s gone for good now.

Value for money

€500 in 2020 is about €614 today, so the Pixel 9 Pro is almost exactly 50% more expensive. Now, to be fair, my Pixel has the maximum possible onboard storage (512 GB) to make up for the lack of sdcard, whereas the S10 had the minimum possible (128 GB). However, flash prices have also fallen dramatically since then, so result: S10 win.

OS

The S10 ran a heavily-modified-by-me edition of OneUI 3.1, which is based on Android 11. There was an Android 12 release, and I really should have upgraded my phone and redone all my customisations. But it was so much work that I just didn’t, despite the security risks. Of course, Android 12 is now also orphaned and not receiving security updates either, so it’s moot.

The Pixel 9 Pro is running GrapheneOS, which is based on Android 16. Due to how I have configured GrapheneOS, it is undoubtedly more awkward to use than the Samsung, but that’s my choice. I have not found anything in Android 16 to make it particularly stand out from Android 11, if I am really honest (I found the same from Android 9 for the HTC 10). Result: Draw.

CPU

The S10 has an eight core setup with four performance cores and four efficiency cores. So has the Pixel 9 Pro. The latter runs at peak about 10% faster clock speeds, however benchmarks show almost exactly double the performance in each of single core, multi-core and graphics. It also has exactly double the RAM (16 GB vs 8 GB).

In use, the Pixel 9 Pro is obviously a bit faster. I’m not sure if it’s more the faster display refresh rate, but there isn’t much in it in my opinion. I would caveat that GrapheneOS runs every service and every app inside its own virtualised container for security, and it is well known that GrapheneOS runs a good bit slower than stock as a result. I’ll still call it – just about – for the Pixel 9 Pro. Result: Pixel 9 Pro win.

Display

As I’ve mentioned on here before, the S10 has the best display my eyes have personally ever been laid upon. It could render 113% of DCI-P3 at brightnesses plenty to see easily in bright sunshine outdoors whilst wearing sunglasses. It could also dim itself at night time to very low levels for reading without disturbing Megan. It is very colour accurate, has oodles of contrast, all with a 550 ppi density. It is an absolutely fabulous display.

The Pixel 9 Pro has a lower resolution display at 495 ppi, so on that it is inferior – though you’d only notice if putting the phone into VR goggles, and Google has decided we can’t do that any more (while those apps still worked, the S10 was absolutely amazing when used to view VR thanks to such a high density display). I put both phones side by side, cranked both to maximum brightness, and had them render the exact same Rec.2020 wide gamut 4k resolution 60 fps videos. Hand on heart I could not differentiate between them. Both had identical brightness, identical colour rendering, identical images except for some slight HDR tone mapping fringing in one part of one video on the S10, which is absolutely a software bug and may well have been fixed had I bothered to upgrade it to Android 12. And even with that HDR tone mapping fringing, it would have been unnoticeable if I didn’t have a side by side comparator (it looked to me like a math rounding bug, quite subtle and only present in a very short scene amongst several videos).

On the one hand, it’s poor that it has taken five years for other phones to catch up with the S10’s amazing display (which also appears to have completely unaged from my testing). On the other hand, it shows how in 2019 Samsung was fitting the future of all phone displays to their flagships, and all the early issues with OLED displays going stripey over time (like with my first two OLED display phones) have been fixed. Result: Draw.

Audio

The S10’s speakers were much more tinny than the HTC 10’s, but far louder so I could now hear the radio in the shower. This was very welcome at the time of the upgrade. Due to its much wider diameter speakers, the Pixel 9 Pro returns more bass to the upper midrange without losing the maximum volume – in fact, I think at maximum volume it might just be a touch louder than even the S10.

I’m unsure, however, that the Pixel 9 Pro’s speakers are better. The extra upper midrange bass is welcome, but it seems to muddy the sound in a way I don’t much care for, and which I don’t remember happening in the HTC 10 which had lovely, if not loud enough, speakers for their size.

Don’t get me wrong – the Pixel 9 Pro speakers are plenty good enough for all the uses you’ll need them for. Playing Massive Attack’s Teardrop at maximum volume is absolutely acceptable, there is no distortion, there is as much bass as a ~5 mm diameter speaker can generate, and the audio is clear and loud enough to fill a room. It just sounds … unbalanced … somehow. Almost certainly something which could be tweaked in an equaliser, but it just seems to me like whoever at Google tuned the phone’s speaker configuration didn’t put quite enough effort into the software side of things. Whereas while the S10’s speakers have no bass at all because they’re much smaller, the sound which emerges is very reasonable to my ears for what they are: more balanced. Like, it’s not trying to be something which it can’t do as hard as the Pixel 9 Pro does at full volume.

Putting both devices side by side at half volume, I gotta be honest: the S10 renders music better. The sound is clearer, better balanced, and not slightly muddy and unbalanced like the Pixel 9 Pro.

There is also that elephant in the room that as with all recent phones, the Pixel 9 Pro does not have a headphone socket while the S10 does. And I still have plenty of devices incapable of Bluetooth audio, for which I had to go buy a bunch of Bluetooth audio adapters so the Pixel 9 Pro can render to them. So I think at this point the result is clear. Result: S10 win.

Camera

The S10 has three cameras on the back: (i) 12 MP wide with hardware image stabilisation (ii) 12 MP telephoto with 2x zoom with hardware image stabilisation (iii) 16 MP ultrawide. These could capture video in HDR at 4k @ 30 fps, or 1080p @ 60 fps, and though the HDR gamut was not as accurate as perhaps it should have been, you’ve seen many of those captured videos on this website in the past and they’re very good. The selfie camera wasn’t great, 10 MP with a good bit of graininess, and the colour reproduction always looked washed out. But it wasn’t bad either, and better than the rear cameras on many phones e.g. the Galaxy S7 which Megan had before her S10.

I was very happy with the cameras on the S10 over the past five years – yes, if zoomed in to the max on the photos there was excessive smoothing and sharpening, and to be honest reducing the resolution of all photos by three quarters was almost always wise. But it generally took really excellent ~3 MP photos with great colour balance and detail, and the ultrawide was useful in many constrained space situations, as was the telephoto, especially for taking show-and-tell shots for this website without shadows of me from the ceiling lights messing up the shot.

The Pixel 9 Pro also has three cameras on the back: (i) 50 MP wide with hardware image stabilisation (ii) 48 MP telephoto with 5x zoom with hardware image stabilisation (iii) 48 MP ultrawide. These too can capture video in HDR at 4k @ 30 fps, or 1080p @ 60 fps, and with better (to my eyes) HDR gamut accuracy. The selfie camera is a 42 MP ultrawide, and looks just as good as the rear cameras. As already mentioned on this virtual diary, thanks to the newer Android version, photos now also encode HDR via a gain map extended JPEG.

Fully zoomed in, the images are a bit grainy, but neither over smoothed nor over sharpened. Similar to the S10, reducing the resolution by three quarters is also almost always wise. But now you get a ~12 MP high gamut high quality photo, whereas the S10 can only do a ~3 MP standard gamut high quality photo. Here are examples of the exact same scene taken at the exact same time using the S10 and the Pixel 9 Pro where you’ll easily notice the slightly wider field of view of the Pixel 9 Pro’s main camera, and the 4x more detail is very apparent:

I suppose it’s not really a contest, at least for the main camera. The ultrawide on the back is also great, and for the selfie camera it’s not a contest: the Pixel 9 Pro wins hands down.

For the telephoto however, I’m more ambivalent. If I have a shot where the 5x zoom is handy – e.g. taking a picture of horses at a distance so as to not spook them – it’s hands down better. However, for that use case, I’d prefer a 10x zoom if I’m honest. If I’m doing show-and-tell shots, the 5x zoom is too much, and I end up digitally zooming my main camera instead, which is okay I suppose given its very high native resolution. That leaves the 5x telephoto in an odd position for me – I don’t think I’ll use it anything like as frequently as I did the telephoto on the S10. For what I use cameras on the phone for, it doesn’t have a good trade off in my opinion. Taking it to 10x zoom or more would tick my box, and I suppose I can still digitally zoom that 48 MP image up to 10x. But if it were 10x optical zoom, I could digitally zoom in much further, like a telescope, and that is genuinely very useful especially when you live rurally and do a lot of walking around in nature.

With those caveats and concerns listed, I’ll call the blindingly obvious. Result: Pixel 9 Pro win.

Fingerprint reader and buttons

Back when I got the S10, I found its below-screen ultrasonic fingerprint reader inferior to the physical button on the bezel below the HTC 10’s screen. Subsequent firmware releases have significantly improved the S10’s fingerprint reader, and it’s nearly as good as the Pixel 9 Pro’s, which is a little bit better again. I’d still take the physical button personally, but between just these two phones fingerprint based access is basically identical.

The S10 annoyingly put its volume buttons on the left side, which ruined the use of any case which folds over from the left as the volume buttons become useless. I therefore ended up using a case without a front cover, and unsurprisingly I then cracked the screen when I dropped a tool on it. The Pixel 9 Pro puts ALL its buttons on the right side, so cases with a left folding cover now just work. However, if I am honest, the Pixel 9 Pro puts those buttons in the wrong place – the power button is way too low (I assume to not clash with the camera module), and the volume buttons are exactly half way down the side, which means any clasp on the case flap now covers those volume buttons. Which is so very avoidable and annoying.

Both phones kinda suck on button placement, so result: Draw.

Handfeel

The Pixel 9 Pro is undoubtedly much heftier than the S10. It’s bigger, and much heavier, and that’s very noticeable in hand feel. There is another big difference: the Pixel is explicitly designed to always be used with a case, so the cameras bulge out and make the phone top heavy:

Once you then add the case, the Pixel 9 Pro becomes like a phone of years past: chunky, heavy, and noticeable in your pocket. It’s twice as thick as the S10 in its case, taller and wider, and weighs 321 grammes vs 217 grammes, so about 50% more weight.

Now for me personally I like a chunky heavy phone. I’ve said this on here a number of times going right back to the 2000s. The reason why is if I can feel it in my pocket, I notice when I’ve forgotten it, and there have been past phones which were so small and light I tended to misplace them frequently. I also think that the thinner the phone, the more likely it is to snap if in a back pocket when you bend down. I have few such qualms about the Pixel 9 Pro.

Given that I get back my cases with a folding front flap, and the overall improved durability, for me the result is: Pixel 9 Pro wins.

Summary:

S10 wins two; Pixel 9 Pro wins three; Draws were three. That’s surprisingly similar to the HTC 10 to Galaxy S10 comparison five years ago. Basically new phones of recent years are way better in maybe one thing, but on the rest they are similar or go slightly backwards. I guess that’s still progress, of a kind.

To be clear about this, I care more about the high gamut photo format than probably any other hardware related feature in the new phone, and that’s 100% software – the S10’s cameras were perfectly able to capture HDR if the software let them.

Where the biggest improvements for me with this upgrade will lie (apart from the improved battery life, obviously) will be mainly in being able to run GrapheneOS instead of a more traditional phone operating system. That I’ll write another post here about, either the next post or perhaps the post after the next post.

What’s next?

Apart from that post on GrapheneOS, there has been forward progress on the foundation design for my house. At the time of writing, I’ve seen a first draft of that foundation design, and I have already sent my architect a list of errata that I found with it. He’ll likely get onto that next week, so possibly by mid-October I’ll be able to do a show and tell post on those here.

As mentioned previously, we were thinking of ordering the EPS insulation for the outhouse at the same time as the house to save on delivery costs. Unfortunately my engineer felt they would need to insist on the outhouse design meeting the KORE agrément, and I felt that was massive overkill for a single storey single outer leaf EPDM covered roof outhouse which has far less loading on its concrete slab than a two storey double outer leaf slate covered roof building. I really want to build that outhouse for a minimum possible cost and effort, and if that means not meeting the KORE agrément, so be it. So I’ve refined the design somewhat since my post last May showing the proposed outhouse buildups, and I expect I’ll go with that when the time comes using loose sheets of KORE EPS from a building supplier. More expensive on the EPS yes, but less expensive on the concrete and reinforcing mesh, and definitely less hassle to build.

I’ll end this post with a few pictures taken using my Pixel 9 Pro along the nearby Analeentha greenway, and in Spain last week. I’m sure we’ll all agree they are very pretty:

Here’s the entrance to the Analeentha greenway using the main camera and telephoto to demonstrate the ‘tunnel’ effect the 5x zoom telephoto camera enables:

Here are the walls, cathedral and shrine to St. Teresa in Ávila, Spain:

And finally, last post I showed you the inside of my old watch. I’ve since had the time to disassemble it.

Kudos, as usual, to Chinese designers for making the electronics they design entirely screw assembled and therefore easy to completely break apart and reassemble. There was nothing surprising in there that I found, and I found it both very well assembled and manufactured. The barometric pressure sensor and vibrator motor are clearly visible on the PCB; everything else is under the double sided metal shrouded top of the PCB. I didn’t bother lifting that off, as the CPU and chipset are all proprietary designs for this watch model anyway, so there was nothing to learn.

#phone #s10 #pixel




Thursday 18 September 2025: 17:38. Tomorrow I’m off to Spain! As you’ll see a few weeks from now, earlier this week I designed some of the construction detail for the outhouse, which became urgent because my structural engineers are currently designing the insulated foundations for my house after finally becoming unblocked by the builder. They should have a Bill of Materials (BoM) for installing the foundations while I am in Spain, which principally will consist of many pallet loads of structural expanded polystyrene (EPS). In order to save on delivery costs and take advantage of bulk order prices, I wanted to throw the insulation for the outhouse in with the insulated foundations order, and for that I needed to calculate and design enough construction detail to create a BoM for the outhouse.

You’ll see lots more about that in a future post, but this post will be about comparing my new Huawei Watch D2 to my former Amazfit GTS 2 Mini watch bought in 2021.

Why did I choose the Amazfit GTS 2 Mini back in 2021?

Before 2021, I hadn’t worn a watch since I stopped whilst attending Hull University in the late 1990s. Around then, mobile phones became good enough and reliable enough that you could be assured that when you checked them, they wouldn’t have run out of battery, they’d tell you an accurate time, and their alarm clock was usually reliable (the Nokias back then occasionally forgot to alarm, but it was rare). So having to bother with a watch was hassle, and I just stopped wearing mine, which was a badly scratched, wind-up, mostly plastic thing (which I still have, and it still works!).

Anyway, from about 2020 onwards smart watches began to not suck sufficiently that I began to think about buying one, and when Amazfit launched the GTS 2 Mini in 2021 I gave it three months to ensure it wouldn’t be a dud, and then I bought one. The things I wanted the most at the time were:

  • Non-negotiable: It needed to work with Gadgetbridge, which is the enthusiast open source phone companion software which stores all your data on device in easy to extract SQLite databases. To be specific, it needed to work without the proprietary vendor app which uploads all your personal data to a cloud for somebody to monetise.
  • Must have: I didn’t want to ever notice it being on me, including at night time asleep. I was coming from zero watch, and 99% of the smart watches until then were big bulky heavy things which would have annoyed me.
  • Must have: I didn’t want to have to charge it more than once per week. Only the relatively featureless watches like a Pebble until then lasted a week on a single charge. Anything with slightly more features usually needed recharging daily, which was a showstopper for me.
  • Must have: I wanted an always on display, because the ones which turn on when you raise your hand annoy me. That almost certainly implied an AMOLED screen, which had only just begun to be fitted to the budget end of smart watches in 2021.
  • Nice to have: I wanted GPS tracking to accurately track my exercise which Gadgetbridge would record over time.
  • Nice to have: A barometer, so the GPS tracking is useful when climbing mountains with the children e.g. GPS shows you spent two hours walking 1 km, when in fact you climbed 600 metres as well!

The Amazfit GTS 2 Mini supplied all the above and plenty more in a 19 gramme package with a 1.55 inch display for, at the time, €85 inc VAT delivered. I was genuinely pleased with the device – I even bought Megan the reduced cost edition the following year, which uses an LCD display instead of the AMOLED display, and she is also very happy with it.

Unfortunately in July the screen popped off! What had happened is that the battery had swollen, and they had cleverly designed the device so that a swelling battery pops off the front to let you know you can’t use it any more. Its battery life had recently been on the wane in any case, having shrunk to three and a bit days down from five and a bit days when new (this is with the display always on, and me doing a few exercises with it per week), so I knew a replacement was coming sooner rather than later. The screen popping off just made replacing it more urgent.

Why did I choose the Huawei Watch D2?

Due to being unemployed, I had more time than usual to choose a replacement and I spent a good few days umming and ahhing over what direction I wanted to go next. Should I choose another Amazfit? They had evolutionarily improved since 2021, albeit at added cost. But so had Gadgetbridge, which now supported a much wider range of devices. One category of those stood out: Huawei/Honor devices paired immediately with Gadgetbridge without having to do any auth key extraction dance from the manufacturer’s cloud service. The fact they ‘just worked’ out of the box with Gadgetbridge was attractive. Huawei watches were also bleeding edge in terms of bang for the buck: they were aggressively pushing superb hardware and constantly, genuinely improving software, all at prices which discount rapidly after launch.

My list of must haves and nice to haves above hadn’t changed, though finding a new watch with a similar featureset to the GTS 2 Mini at under twenty grammes of weight had become hard. It came down to a choice between the 30g Huawei Watch Fit 4 for €133 inc VAT, the 30g Huawei Watch Fit 4 Pro for €257 inc VAT, or the 40g Huawei Watch D2 for €278 inc VAT. The latter had one very standout feature: genuine true blood pressure monitoring! It implements this using an inflatable bag under the watch strap, and to the best of my knowledge there is absolutely no other watch on the market right now which gives as accurate blood pressure readings as that watch.

Otherwise it is basically the Huawei Watch Fit 4 Pro with a bigger battery (524 mAh vs 400 mAh), albeit with a chunk more weight and size to accommodate the micro air pump and larger battery.

I’ll admit I did sleep on that decision for two nights. Such a big, bulky watch was a gamble. I was also fairly sure that the inflatable bag would irritate my skin with sweat, so I wondered whether buying it would be a waste of money if I couldn’t wear the thing.

Eventually I did plump for the Huawei Watch D2, and having worn it for over a month now I’ve gotten used to it and I think it’s great – though I was right about the inflatable bag upsetting my skin, but more on that shortly.

Here is the Huawei Watch D2 on my arm; it is a big, chunky watch. Note the brown leather strap …

Yes that is an aftermarket leather strap fitted to it! The Chinese are great for aftermarket accessories, and Aliexpress has the right adapters to convert the proprietary Huawei strap mount into a conventional 22 mm watch strap. That solves my skin irritation problem.

I’ll get into it more shortly, however random blood pressure measurements aren’t all that useful. What you really want is blood pressure sampled regularly many times over a single day or week. Thanks to the quick strap change facility on the watch, I can pop the inflatable bag strap on and off and swap one strap for another in under a minute, so when I need to do blood pressure monitoring it’s very easy. It also reduces wear on the inflatable bag strap, which doesn’t seem to me likely to last a year if the watch is constantly being removed and put back on – which you really do need to do for showers, because thanks to that mini air pump, and despite Huawei’s claims, user reviews are clear that it’s best not to immerse this particular watch in water.

Finally I’ll mention now one particularly unpleasant surprise with this watch: the supplied strap is NOT the one in units supplied to internet reviewers. It is this crap thing:

The strap which internet reviewers reviewed has a second metal piece attached to that first piece which lets you quickly and easily remove and fit the watch, as it acts like a quick release lever. What consumers actually get on purchase (I am not the only one, according to Amazon and Aliexpress reviews) is half of the metal quick release lever, now fused onto the sliding clasp.

Getting this strap on and off is therefore an absolute royal pain in the ass. My hand is nearly too big to fit through with the strap at its widest, so a lot of pressure gets put onto where the bag connects to the strap. It would be fine if you used this strap once per month. But daily – no, it would rip the bag before long.

Hence if you’re considering buying this watch, buy an aftermarket strap with it and factor that into the cost.

Comparing the Huawei Watch D2 to the Amazfit GTS 2 Mini

| | Huawei Watch D2 | Amazfit GTS 2 Mini |
| --- | --- | --- |
| Cost in 2025 euros | €278 inc VAT delivered | €101 inc VAT delivered |
| Dimensions (mm) | 48 x 38 x 13 | 41 x 36 x 9 |
| Battery | 524 mAh | 220 mAh |
| Battery life when new (with display always on) | ~7 days | ~6 days |
| Display size | 1.82 inches | 1.55 inches |
| Display resolution | 480 x 408 | 354 x 306 |
| Display | 1500 nits AMOLED | 450 nits AMOLED |

Features of the Amazfit GTS 2 Mini:

  • Bluetooth
  • Touchscreen
  • Notifications from phone
  • 60+ exercise modes
  • Heart rate
  • Step counting (via accelerometer and gyroscope)
  • Sleep tracking (via accelerometer and gyroscope)
  • Blood oxygen saturation
  • GPS
  • Compass
  • Barometer
  • Stopwatch
  • Timer
  • NFC payment (only in China)

Features of the Huawei Watch D2:

  • Bluetooth (including low energy)
  • Touchscreen
  • Notifications from phone
  • 80+ exercise modes
  • Heart rate
  • Step counting (via accelerometer and gyroscope)
  • Sleep tracking (via accelerometer and gyroscope)
  • Blood oxygen saturation
  • GPS
  • Compass
  • Barometer
  • Stopwatch
  • Timer
  • NFC payment (not in Europe nor US)
  • **Ambient light sensor**
  • Skin temperature sensor
  • Electrocardiogram (ECG, medically certified)
  • Blood pressure (medically certified)
  • **Ambulatory blood pressure monitoring (ABPM)**
  • Arterial stiffness
  • Qi wireless charging
  • Speaker and microphone
  • **~2.5Gb of music storage (can be played to any Bluetooth speaker as well as on the watch speaker)**

I’ve highlighted three things above because I think they especially stand out. Firstly, the ambient light sensor is a very simple addition, yet it means that the display can go full brightness outdoors, curing one of my biggest problems with the Amazfit – I couldn’t make out its screen at all in sunshine, which meant blindly tapping at the screen to start an outdoor cycle, and it was annoying when I mistapped and it didn’t start the exercise recording. That light sensor goes the other way too – as I live in a cold place, I usually have a sleeved top on which covers the watch. The darkness means it can dial down the brightness of the always on display, and I get nothing like the hit to battery life that all the warning messages from Huawei claim when you turn on the always on display in the settings.

The second big standout thing in my opinion is the ambulatory blood pressure monitoring (ABPM). If you have high blood pressure like me, you will be aware that you have to take your blood pressure at the same times of day every day for a week to get a reliable sample. This has two big problems:

  1. It’s rare you’ll have the time to take your blood pressure when you’re stressed e.g. in the day job. So you’ve no idea what the effects of your day job are on your blood pressure.
  2. You have no idea what your sleeping blood pressure is.

I mentioned above that the blood pressure monitoring was the thing which swung me to the D2, but I ought to explain why as it won’t be obvious: to measure your night time blood pressure conventionally, you basically have to sleep with two arm cuffs on, and the machine will pump both every thirty minutes and choose the one you’re not lying upon. This means tubes going all around you, never mind the discomfort of the arm cuffs, so you don’t sleep particularly well and unsurprisingly your sleeping blood pressure reading is then way off. This watch promised peaceful night time blood pressure monitoring. That’s worth money, and it’s why I forked out.

Finally, the third big standout thing isn’t for people like me, but rather for people like Megan who, when they go jogging, don’t like taking their phone because it’s big and may get rained upon. If they listen to music while they run, the ability to have the watch feed music to their headphones is a killer app. Megan currently has to bring her phone with her to get that music supply, and she really doesn’t care for it. So for those who, like her, hate jogging with a smartphone attached, this is a standout feature.

Comparing the two watches side by side physically:

Obviously the Huawei is about 3x the cost of the Amazfit, so of course you’d expect more and better everything for the added cost. And while I’m not entirely sure if that added cost would be worth it for most people, for people with high blood pressure like me, it’s an absolute eye opener as we shall see next.

What I didn’t know until now about my high blood pressure

Firstly, I should mention that at the time of writing, Gadgetbridge has no support for:

  • Blood pressure
  • ECG
  • Arterial stiffness

The latter two don’t enable themselves on the watch unless you first pair the watch with Huawei’s app so it can do a region check, and turn on the features permitted in your region. After that you can do the auth key extraction dance from the Huawei cloud as per Gadgetbridge wiki instructions, and you’re good to go.

There are active open tickets for supporting these in Gadgetbridge, and there is a PR implementing blood pressure recording so I don’t doubt these things will get supported in time. However, for now, you’ll have to manually transcribe from the watch into a spreadsheet which isn’t too painful, and the watch does have a pretty good GUI:

Once it’s in a spreadsheet, you can graph it! Here is most of an ABPM measurement done on the 1st September, where blood pressure was measured every thirty minutes at night time and every forty-five minutes in the daytime:

As it was before Megan’s birthday, I was still drinking alcohol and due to unemployment and the amazing weather this summer, I’ll admit I was drinking alcohol most days. I didn’t drink alcohol on the 1st until just before bed, so what you’re seeing is the effects of alcohol the night before on blood pressure the next day.

Let’s compare that to an ABPM taken on the 16th September, after I’d been completely off the alcohol for two weeks:

The 16th was like a day job day for me: I worked a full day in front of the computer coming up with an outhouse construction detail, which is why I chose that day for the ABPM test (the few missing results are because the watch ran low on battery, so I had to go charge it).

Comparing the two is a bit of an eye opener – on the 1st, systolic pressure was well above 125 mmHg and diastolic pressure usually above 85 mmHg. Not good! But for the 16th – despite me getting quite stressed about the outhouse detail especially getting late into the night as I was running out of time – systolic pressure was generally well under 125 mmHg. Diastolic pressure wasn’t much better than on the 1st until after I finished work for the day, then it dropped like a stone to around 75 mmHg (and the systolic to around 118 mmHg).

This is why, dear readers, regular consumption of alcohol is not good for your blood pressure in general! Which is why I go teetotal between the summer and Christmas, and then between Christmas and the summer each year. It gives my whole system an opportunity to heal, restore balance, and basically return to health.

All that said, I have learned that a day job is about as bad for your blood pressure as drinking alcohol daily. I guess that makes sense. Combining the two is even worse for you.

The other less good news from this is that my night time blood pressure isn’t great: around 115 mmHg for systolic and 76 mmHg for diastolic. The systolic is okay, but the diastolic should be below 70. As I’ve often mentioned on here before, my diastolic blood pressure appears non-linearly related to my weight – if I go much above 76 kg, diastolic blood pressure rises markedly and gets much worse with every added kilogram.

For all these reasons I really do need to lose weight and get back to pre-covid weight. I’ve redoubled my efforts on that since the kids went back to school, and I’ve begun to notice my belt is getting a bit loose which is a good sign.

What does the inside of a smart watch look like?

I have been promising for a few posts a photo of the inside of the Amazfit watch seeing as its screen had usefully popped itself off:

I still need to disassemble it further – I’d like to see what else is in there, seeing as it will be going to recycling sooner rather than later. Doing so is on my todo list – to be honest, I haven’t had the time recently! Today I had to fully finish migrating off my old phone before I go to Spain, which took longer than expected (I had to review all the pictures I’d taken in the past three years as part of the backup and migration process), and I also had to do my British tax return, as the final submission date is next month and it needs to be posted before I go to Spain. I also dealt with a bunch of other small items today, and indeed have been doing so all this week … ultimately all those were higher priority, and disassembling this watch can legitimately wait until I get back from Spain, so I shall do exactly that.

What things about the Huawei Watch D2 are worse than the Amazfit GTS 2 Mini?

I thought I should end with a list of things which are, in my opinion, definitely worse on the Huawei watch than on the Amazfit watch. It isn’t a long list, and it’s all software, so who knows – maybe some AI crawler will report these back to Huawei or something.

As I mentioned above, I loathe them baiting and switching us on the quick release metal clasp in the strap. I think that very dishonourable of them. If the reviewer got the quick release mechanism on the bundled strap, so should the customers. Anything else is just dishonest.

I definitely find choosing a function on the Huawei more hassle than on the Amazfit. You have this scrollable menu of items, three wide, with a nice Mac OS type zoom effect. It looks very nice, but it still takes me longer to navigate than the simpler, less fancy looking menu on the Amazfit. That makes using the watch slower and more awkward than it could be.

I don’t like how the Timer app works on the Huawei. When the time is up, it keeps buzzing at you until you shut it up, then it resets the timer, which means if you’re cooking something and you need another two or three minutes, you have to go back in and manually start another timer. On the Amazfit, it buzzes until you acknowledge it, then carries on counting down into negative time until you manually reset it. If you just need another two minutes for cooking, that is very convenient and ergonomic. There is also another annoyance: on the Amazfit you can set custom timer durations and it’ll remember them. On the Huawei you can’t, so I have to plug twenty minutes into the custom timer each and every time, which is annoying. And very avoidable.

I find transferring music onto the Huawei very tedious because it can only transfer over Bluetooth, and it takes an absolute age. You therefore end up transferring music exactly once and not changing what’s on the watch. This feels avoidable.

Finally, it’s a small thing, but the list of available cards when you swipe left and right on the main display feels both less useful and worse presented than on the Amazfit. On both watches the cards and their ordering are configurable, but the ones Amazfit lets you choose from are just … better. On the Amazfit, I’d regularly swipe left and right because what I found there was useful to me. On the Huawei, I’m very much meh! The only vaguely useful card is the weather forecast, and you wouldn’t use that for today’s weather in Ireland because the rain radar is vastly more useful. It feels – again – avoidable.

What’s next?

I’ll be taking my new Google Pixel 9 Pro for a good testing while in Spain. Expect many high gamut photos on here when I return!

Theoretically, we might have some finished house foundation plans to show and talk about here not long after I return. Here’s hoping!

Ok, time to eat and then start packing my bags! I leave by train early tomorrow morning, and I should get into Madrid not too late on Friday evening. See you all in a few weeks!

#watch




Sunday 14 September 2025: 16:35. Last post I mentioned that there would be coming here soon a review of my new watch, a Huawei Watch D2, and my new phone, a Google Pixel 9 Pro. That won’t be this post – one of my big chores these past two weeks was to replace all the proprietary cloud solutions the site is currently using with my own infrastructure. This was greatly raised in priority because I intend to run GrapheneOS on the new phone, and that lets you segment Google Play Services off into its own enclosure along with only the apps which require Google Play Services. That enclosure is closed down every time you lock the phone, so it doesn’t run when the phone is locked, which means that anything Google Play Services based (including all of Google’s own stuff) can’t spy on you when it’s not being used. That, in turn, means that you won’t get any notifications through Google Firebase, which is the Google infrastructure for pushing notifications to phones. So, you need to set up your own notification push infrastructure, and there are many ways to do that.

My mobile phone push solution: Ntfy

Thankfully, the DIY solution space here is quite mature. In fact, it’s so mature that there are many competing solutions, all with their own pros and cons. I ended up choosing Ntfy as the mobile push solution, though I could find absolutely nothing wrong with Gotify. I only chose Ntfy because it has many times the userbase of Gotify, which usually means it will be more mature, more debugged, and more optimised. From a thorough reading of the bug trackers of the various solutions, and from reading the source code for Ntfy, I reckoned they’d done the most empirical testing on ensuring a minimum battery consuming solution which is reliable.

Ntfy is about as simple as you could get for this: it does exactly one thing only. You can push text messages with optional attachments like images to a channel name of your choice. Anybody subscribed to that channel gets notified. And that is literally it – you can even configure it to only use RAM for storage, which is perfect for an embedded grade computer with limited storage write cycles and a high likelihood of sudden power loss.

You can, of course, configure it with usernames and passwords and access tokens and all the other usual REST access control. You can closely configure which users can push to which channels if you like. You can set a TLS cert on the public API endpoint so no passwords nor access tokens can get sniffed. In short, it does exactly what it says on the tin, and to date I have found it ‘just works’.
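To give a flavour of how simple the API is, here is a minimal sketch using curl – the broker hostname, topic name and token below are made-up placeholders for illustration, not my actual setup:

# publish a plain text message to a topic (anybody subscribed gets notified)
curl -d "Backup completed" https://ntfy.example.com/site-alerts

# publish an image attachment with a title, authenticating with an access token
curl -T snapshot.jpg \
  -H "Filename: snapshot.jpg" \
  -H "Title: CamNorthWest triggered" \
  -H "Authorization: Bearer tk_exampletoken123" \
  https://ntfy.example.com/site-alerts

# subscribe from the command line and print messages as they arrive
curl -s https://ntfy.example.com/site-alerts/json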

Another neato feature: you can provide up to a three button menu, with an action per button press. So, for example, you could send a still image from the camera with a push button for ‘view the video around this time’ and another for ‘set off the alarm’. Pressing them pushes messages at other channels or performs arbitrary REST API invocations, which lets you configure simple bidirectional communication. Here it is in action:

I didn’t personally test it, but Ntfy can also optionally push to mobile phones via Google Firebase or Apple’s equivalent. So if you’re somebody running Google Play Services all the time, you can route Ntfy via that instead of replacing Google Play Services with Ntfy. There is also an open source push notification standard called UnifiedPush, which Ntfy can act as a distributor for if you ask it to. There are plenty of config options likely to suit most people. See below for measurements of mobile battery consumption, which are for the Ntfy app directly listening to a custom Ntfy broker running at the site.

Upgrading OpenWRT

To use the Ntfy Android app, you need to have the Ntfy message broker running somewhere public. I couldn’t see any good reason not to run it at the site, especially as failure to connect would then get reported, and that is also something I want to know about, i.e. power or internet loss at the site. The site’s IP address is stable over time, and Eir don’t impose any restrictions on inbound connections, so you can absolutely run a public server there.

With the AI PC removed, the main sources of compute out there are the two hand built Banana Pi R3 boxes which provide the Wifi, firewall and routing. They run OpenWRT, and they’re fairly well endowed with specs: 2 Gb of RAM, four ARM Cortex A53s running at 2 GHz, and 8Gb of eMMC storage. Until this week, they were running the very first OpenWRT firmware which was compatible with their hardware, which is a couple of years old now – after all, I started work on making those boxes back in early 2023. But that edition of OpenWRT couldn’t run Docker, and I needed Docker to get Ntfy (amongst other services) running. And of course that edition of OpenWRT was also too old to be able to self upgrade to the latest OpenWRT, so I ended up spending an entire day at the site on the Wednesday two weeks ago getting those two boxes onto the latest OpenWRT with everything reinstalled and reconfigured exactly as it was originally. Painful, but hopefully I’ll never have to do that again.

Now that I am on the latest OpenWRT, standard Docker Compose more or less ‘just works’. I say ‘more or less’ because you will need a custom network configuration in your compose files to make it work on OpenWRT (see below), but once I’d figured that part out, it has honestly been exactly the same as on a full fat Linux installation, and all the Docker stuff I’ve installed has pretty much just worked. This is despite how barebones OpenWRT is in comparison to a normal Linux distro, and the very limited 6.5Gb storage partition (which runs the F2FS filesystem as it operates on MMC storage). Performance is acceptable; YABS reports as follows:

| | Banana Pi R3 on my site | My colocated Raspberry Pi 5 | A very budget VPS I rent |
| --- | --- | --- | --- |
| Location | Cork, Ireland | Mratín, Czechia | Amsterdam, Netherlands |
| CPU | ARM Cortex A53 @ 2.0GHz | ARM Cortex A76 @ 2.4GHz | Intel Xeon Gold 6122 CPU @ 1.80GHz |
| Storage | eMMC running f2fs | NVMe SSD running ZFS | Shared NVMe SSD running ext4 |
| YABS Single Core | 194 | 772 | 569 |
| YABS All Cores | 525 | 1368 | 1792 |
| YABS Disk Read | 58 Mb/sec | 232 Mb/sec | 111 Mb/sec |
| YABS Disk Write | 65 Mb/sec | 239 Mb/sec | 112 Mb/sec |
| YABS Download speed | 930 Mbps | 929 Mbps | 1946 Mbps |
| YABS Upload speed | 102 Mbps | 928 Mbps | 2089 Mbps |
| YABS worst download locations (< 50% capacity) | Sao Paulo (419 Mbps) | Sao Paulo (146 Mbps), Los Angeles (245 Mbps), Tashkent (250 Mbps) | Singapore (537 Mbps), Sao Paulo (271 Mbps), Los Angeles (447 Mbps) |
| YABS worst upload locations (< 50% capacity) | None | Los Angeles (219 Mbps) | Los Angeles (112 Mbps), Sao Paulo (158 Mbps) |

For a box consuming around five watts, that is decent performance. Sure, one of my Raspberry Pi 5 colocated boxes idles at the same wattage, but if you max out its cores it’ll jump to twelve watts. The RPi5 delivers approx 4x the compute for 2.4x the power, as you’d expect from a big out-of-order core like the Cortex A76 versus the in-order A53. Indeed, as mentioned in the article about my colocated Raspberry Pi 5s, the benchmarks above again demonstrate that clock-for-clock the ARM Cortex A76 matches an Intel Xeon Gold 6122 CPU. The latter is faster in multicore only because it has far more memory bandwidth to avoid stalling the four CPUs.

Anyway, the four ARM Cortex A53s are plenty to run lightweight programs. What we need next is to plug the Dahua IP cameras into Ntfy. Before we get into how I solved that, here is the custom docker compose network stanza for OpenWRT, because it is not documented in any obvious or easy to find place:

networks:
  default:
    driver: bridge
    driver_opts:
      # allow containers on this bridge to talk to each other
      com.docker.network.bridge.enable_icc: "true"
      # NAT container traffic out through the host
      com.docker.network.bridge.enable_ip_masquerade: "true"
      # bind published ports to the OpenWRT LAN address
      com.docker.network.bridge.host_binding_ipv4: "openwrt_ip_address"
      # the Linux bridge device name that OpenWRT will see
      com.docker.network.bridge.name: "docker-lan"
      com.docker.network.driver.mtu: "1500"

docker compose up will create a new bridge device in OpenWRT called docker-lan. You need to adjust its settings to say it is always up, then add a new OpenWRT interface (which I called dockerlan) for the docker-lan bridge, and put that interface into the docker firewall zone.

Finally, in the OpenWRT firewall, allow forwarding between the lan and docker zones in both directions, so lan => docker is permitted as is docker => lan. Do docker compose down to destroy the container, then docker compose up, and you should find your container can now see the network. If you prefer the command line to LuCI, a rough uci equivalent is sketched below.
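I did the above through LuCI, so treat this as an untested approximation of the equivalent uci commands; it assumes the docker firewall zone already exists (the OpenWRT dockerd package normally creates it):

# declare an OpenWRT interface for the docker-lan bridge created by docker compose
uci set network.dockerlan=interface
uci set network.dockerlan.device='docker-lan'
uci set network.dockerlan.proto='none'
uci commit network

# the dockerlan interface also needs adding to the 'docker' zone's network list
# (easiest done in LuCI)

# permit forwarding lan => docker and docker => lan
uci add firewall forwarding
uci set firewall.@forwarding[-1].src='lan'
uci set firewall.@forwarding[-1].dest='docker'
uci add firewall forwarding
uci set firewall.@forwarding[-1].src='docker'
uci set firewall.@forwarding[-1].dest='lan'
uci commit firewall

/etc/init.d/network reload && /etc/init.d/firewall reload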

One thing to be VERY aware of with this configuration is that ports listening within the docker container are ALSO listening on the OpenWRT LAN at the OpenWRT LAN address. If you wish to expose one of those ports to the WAN, you can add a port forward in the OpenWRT firewall. This is very convenient, but be careful: the port number space is shared between docker containers and the host, which makes it easy for ports and services to collide or otherwise interfere with each other.

Replacing the Dahua & Sungrow cloud integrations

Dahua provide a free of cost proprietary cloud based notification push service which can be configured as ‘full fat’ (everything goes via the Dahua cloud), ‘notification only’ (only the event notification goes to Google Firebase) or ‘camera does nothing’ (your local software, e.g. Blue Iris, actively subscribes to events on each camera using Dahua’s REST API). Using the Dahua Android app, you can have the app tell the camera to push notifications to Google Firebase for the app even if you don’t create a Dahua cloud account. Yes the Dahua app does have some bugs, but it works surprisingly well considering. All you need to do is remember, after a push notification, to enable the Wireguard VPN before opening the Dahua app, because any images or video will be fetched directly from the camera – and then it usually ‘just works’, or at least as well as the Dahua app ever works.

The Sungrow inverter also provides a free of cost proprietary cloud based monitoring solution, and you can opt in or out as you choose. If you opt in, your Sungrow inverter will push quite detailed metrics to your Sungrow cloud account. You can also remotely manage the inverter to a very detailed degree from the Sungrow web interface or Android app. When I say ‘very detailed’ I mean it: there are esoteric config options available there that there is no other means of accessing. Whilst all that is great, it is an enormous security vulnerability. A bad actor could cause thousands of euro of damage if they got access to that management interface. Plus, there are the usual concerns with such personal and intimate data going out into the cloud in any case.

I have used the Dahua and Sungrow cloud integrations for the nearly two years they’ve been running now simply out of convenience. But I always intended to move them onto my own, private, infrastructure, and I deliberately made sure before I bought them that it would be straightforward to integrate both into Home Assistant when the time came. Home Assistant, unfortunately, is quite resource hungry. It might plod along with these slow CPUs, but it definitely needs at least 4Gb of RAM and 20Gb of storage. As my Banana Pi boxes have 2Gb of RAM and 6Gb of storage, Home Assistant just isn’t possible on this lower end hardware.

So what else? The next most popular open source home automation software after Home Assistant is probably OpenHAB, which predates Home Assistant by a few years and has retained a slimmer resource footprint. Using their Alpine based docker image, I got it installed and working surprisingly well in 5.5Gb of storage. It raises the RAM usage on the Banana Pi to about 800 Mb, with the rest of RAM filled with disc cached data, so it’s pretty heavy for this class of hardware. Still, it does seem to work, and without much impact on the board as a Wifi router and public facing internet endpoint.

The Sungrow inverter part was dead easy as there is a built in, out of the box integration, albeit not an initially obvious one because it’s part of the ‘Modbus over IP’ module:

The values in percentages are off by a factor of 100, but that’s easy to work around in automations etc. The Sungrow integration provides both control and lots of values to read – you can, if you wish, override the Sungrow firmware configuration and have the inverter behave any way you like.

Configuring the Dahua camera OpenHAB integration

OpenHAB also comes with a Dahua camera integration, but it’s rather more effort to configure because it supports a vast range of Dahua camera models and configurations spanning well over a decade of firmware changes. As a result, it exposes a vast number of fields, most of which will forever read NULL because your camera’s firmware and/or current configuration won’t emit that field.

Solving this took a bit of thinking cap time, but I did figure out a solution. Here is the correct way of adding a Dahua camera to OpenHAB:

  1. In Things, hit Plus => IpCamera Binding => Dahua Camera with API => Enter the IP address and username-password, Create Thing. Don’t forget to give it a suitable name!

  2. Back in Things, enter the Camera just created, choose the Channels tab, at the bottom tick ‘Add Equipment to Model’, tick ‘Show Advanced’, then ‘Select All’, then ‘Add to Model’.

  3. Go outside, and do everything to trigger everything your cameras are configured to trigger upon.

  4. In Items, enter the name of your camera in the filter. You need to examine all the input Switches – if all these are NULL, then your camera needs to be reconfigured (I suggest making sure your ONVIF username and password match your main username and password, because for some reason they are set separately).

  5. If some Items are either ON or OFF, write those down now as those are the only ones we need to subscribe to. These WILL differ based on per-camera configuration even if your cameras are all identical models.

  6. Return to Things and enter your Camera. In the Channels tab, at the bottom, click ‘Unlink and Remove Items’. This will remove all the items. You can now tick exactly the ones you wrote down before, and only subscribe to those alone.

I currently have three security cameras on the site: CamNorthWest, CamMidWest and CamSouthWest. CamNorthWest is configured with an intrusion detection boundary so it alerts if something crosses that boundary:

(in case you’re thinking that green line is down the middle of the footpath, no that is not intentional – a storm pushed the camera slightly to the left and I haven’t gotten around to redrawing the boundary)

I can tell you that for the Dahua IPC-Color4K-X, intrusions appear in OpenHAB as Field Alarm, Last Motion Type is fieldDetectionAlarm, and these fields appear to be active for this camera model, firmware, and current configuration:

  • Enable Motion Alarm is OFF.
  • Audio Alarm Threshold, set to 50.
  • Enable Audio Alarm is OFF.
  • Enable Line Crossing Alarm is ON.
    • yet Line Crossing Alarm seems to remain NULL?
  • Motion Detection Level, set to 3.
  • Poll Image is OFF.
  • Start HLS Stream is OFF but appears to go ON if you try to watch a HLS stream from OpenHAB.

CamSouthWest is the exact same model as CamNorthWest, and is also configured with an intrusion detection boundary so it alerts if something crosses that boundary:

There is one configuration difference: there is an additional post filter on intrusion that the object must be a human or a vehicle. This camera model, firmware and current configuration appears in OpenHAB as Field Alarm, Last Motion Type is fieldDetectionAlarm and:

  • Enable Motion Alarm is OFF.
  • Audio Alarm Threshold, set to 50.
  • Enable Audio Alarm is OFF.
  • Enable Line Crossing Alarm is ON.
    • yet Line Crossing Alarm seems to remain NULL?
  • Motion Detection Level, set to 3.
  • Poll Image is OFF.
  • Start HLS Stream is OFF but appears to go ON if you try to watch a HLS stream from OpenHAB.

In other words, identically to CamNorthWest, but I have manually verified that Field Alarm only triggers with humans and vehicles, unlike for CamNorthWest which also triggers for birds, cats etc.

CamMidWest is very different to the other two. Firstly, it is a Dahua IPC-Color4K-T180 so very different hardware which ships with the latest generation of Dahua firmware, whereas the previous two cameras are on the preceding generation of Dahua firmware (most of the changes are to the UI, but there are a few feature changes too). Secondly, it is configured with Motion Detection with a post filter that the object must be a human or vehicle. This appears in OpenHAB as Motion Alarm with a separate Human Alarm, and these fields appear to be active for this camera model, firmware, and current configuration:

  • Last Motion Type is motionAlarm or humanAlarm.
  • Field Alarm is NULL here.
  • Enable Motion Alarm is ON.
  • Audio Alarm Threshold, set to 50.
  • Enable Audio Alarm is OFF.
  • Enable Line Crossing Alarm is ON.
    • yet Line Crossing Alarm seems to remain NULL?
  • Motion Detection Level, set to 3.
  • Poll Image is OFF.
  • Start HLS Stream is OFF but appears to go ON if you try to watch a HLS stream from OpenHAB.

Subscribing to motionAlarm will get you lots of false positives by definition, so humanAlarm is a much better choice.

Additional fields common to all models which are read-only:

  • Last Event Data is whatever the camera did last e.g. ‘user logged out’, ‘synchronised time to NTP’ etc
  • HLS URL, but its addresses don’t seem to work?
  • Image URL, which returns a JPEG of the current view. Note this also stores a snapshot on the camera’s storage.
  • MJPEG URL, which is a MJPEG video feed of the current view.

Finally – I only trialled this briefly, so didn’t spend much time on it – you can also create a custom dashboard for OpenHAB:

The weather forecast is simply more sensor data, so you could do rules like ‘if the batteries are low, but there will be sunshine later today, charge the EV first’ or ‘if the weather tomorrow will be heavy rain and cold, charge the thermal store to full using cheap night rate electricity; but if the weather tomorrow will be sunshine all day, only charge the thermal store up to 50%‘. There are lots of possibilities here, and OpenHAB is probably as powerful as Home Assistant at this sort of stuff, except it will happily run on your Wifi box with a five watt power budget!

Having camera alerts send a message via Ntfy

There is a dedicated section in OpenHAB for rules which are variations on ‘if this (and/or this …), then that’. You can have rules be conditional on any event, time, system event (e.g. start up etc) with any arbitrary logic between them. Any programmer will find it very straightforward.

To define scripts to execute as a result of a rule, you have your choice of writing the script in Javascript, a dedicated DSL, YAML or a visual programming IDE called ‘Blockly’ which looks like this:

This lets you drag and drop chunks of connector to create a program, which it emits as YAML (and you can hand customise and edit that YAML at the same time, though changing the graphical representation may eat those YAML customisations sometimes). They are obviously trying to replicate VisualBasic from the 1990s, but it’s not quite that fluid nor intuitive. In particular, there is a steeper learning curve than it appears – I had to go search Google a fair few times to figure out how the drag-drop UI works in places. Above you can see a script which performs a HTTP PUT to Ntfy attaching a still from the camera, the last motion type, and an Action button to view the live video right now (which you saw appear in the Ntfy app screenshot above).
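For a flavour of what that rule actually sends over the wire, here is roughly the equivalent as a curl invocation – the broker, topic and snapshot path are made-up placeholders for illustration, not my actual configuration:

# push the camera still as an attachment, with the motion type as the title
# and a single action button which opens the live video in the browser
curl -T /tmp/camnorthwest_still.jpg \
  -H "Filename: camnorthwest_still.jpg" \
  -H "Title: CamNorthWest: fieldDetectionAlarm" \
  -H "Actions: view, View live video, https://cameras.example.com/camnorthwest/live" \
  https://ntfy.example.com/site-alerts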

And yeah, that’s pretty much it for replacing the proprietary cloud services entirely. OpenWRT lets you firewall those devices off from the internet so you’re sure they can’t get out, but both Dahua and Sungrow let you toggle off the cloud push in their config as well. For now, I’ve left both systems running in parallel to ensure everything is working perfectly, and after a week both systems issue alerts in perfect synchronicity, without one being delayed relative to the other etc.
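If you wanted to do that firewalling from the command line rather than through LuCI, something along these lines should work – the camera LAN address below is a made-up example, not one of my actual cameras:

# reject all forwarding from one camera's LAN address out to the WAN
uci add firewall rule
uci set firewall.@rule[-1].name='Block-CamNorthWest-internet'
uci set firewall.@rule[-1].src='lan'
uci set firewall.@rule[-1].src_ip='192.168.1.50'
uci set firewall.@rule[-1].dest='wan'
uci set firewall.@rule[-1].proto='all'
uci set firewall.@rule[-1].target='REJECT'
uci commit firewall && /etc/init.d/firewall reload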

Mobile phone battery consumption

I left the Ntfy app running on the Google Pixel 9 Pro for a day whilst doing nothing else with it, and according to the Google battery status ‘< 1%’ of battery got used by the Ntfy Android app despite it constantly running in the background. I then set up a timer to push messages at it to test its reliability (see the sketch below). Every message was received, and it now reckons 1% of battery was consumed. This seems very acceptable, though this testing was exclusively done on Wifi.
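Something along these lines is all a test like that needs – the broker and topic below are illustrative placeholders, and the interval is arbitrary:

# push a timestamped test message every fifteen minutes, forever
while true; do
  curl -d "reliability test $(date --iso-8601=seconds)" https://ntfy.example.com/battery-test
  sleep 900
done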

I’ve since moved onto the Google Pixel 9 Pro as my main phone, so it has been taken out of the house and away from Wifi (also: when I’m in bed, the Pixel loses all Wifi and uses LTE – it has noticeably worse Wifi than the Samsung Galaxy S10, which sitting right beside it keeps a stable Wifi connection). Averaged over the past four days:

  • From the system perspective: 76% of battery went on the mobile network, 8% on Wifi, 6% on screen, 3% on Camera, 3% on GPS.
  • From the apps perspective: 23% went on the web browser, 12% went on WhatsApp, 3% went on Ntfy, 3% went on the Launcher, 3% went on the streaming radio app.

Between 4am and 6am when I was sleeping at my Dad’s house and it was 100% on LTE and I definitely wasn’t using the phone:

  • From the system perspective: 54% of battery went on the mobile network, 26% on Wifi, 20% on GPS. Ouch!
  • From the apps perspective: 22% went on WhatsApp, 15% went on the web browser, 8% went on Ntfy, everything else was < 1%. Also ouch!

It isn’t widely known that Meta supply an edition of WhatsApp which doesn’t require Google Play Services (here). This works by keeping an open web socket to Meta’s servers so it can receive push notifications. As you can see above, their implementation is nearly three times worse for power consumption than Ntfy’s, so I think I was right that Ntfy would have been heavily debugged and tweaked due to its large user base.

This past weekend I really needed WhatsApp to definitely be working, so I gave it unrestricted background operation permissions. As I won’t need it to definitely be working these next few days, I’ve enabled background usage optimisation going forward, and we’ll see what that might do about WhatsApp chewing down so much battery.

The amount spent on Wifi when there is no known Wifi available is disappointing. It obviously is constantly scanning. I wonder if that is related to the high GPS consumption? Something might be constantly requesting the current location, which then uses the current Wifi environment and GPS. I found that the weather app was refetching the weather every ninety minutes – I’ve now changed that to every six hours, and we’ll see if that improves things.

Finally, I’ll also need to do something, I think, about the web browser, as that power consumption is unacceptable, and I’ve now removed GPS access permissions from everything bar OsmAnd and Google Maps. I’ll keep monitoring battery consumption and keep at the tuning – the default battery consumption of GrapheneOS is one of the biggest complaints by new users on its issue tracker, but the old hands say a great deal can be done by tweaking configuration, so we’ll see how that goes.

What’s next?

I expect to write the article comparing the Google Pixel 9 Pro and my previous Samsung Galaxy S10 when I get back from my trip to Spain in October. Whilst in Spain, I intend to fully test the new phone and see how it holds up. I may get the article comparing my new watch to my old watch done this week, but I have a very full week ahead of me, so it’s entirely possible it’ll have to get pushed to after Spain.

In fact, I’ve been working so hard on burning down the chores and todo lists that Megan actually ordered me to take a lie in last week, which it turned out I sorely needed as I had been getting only six or seven hours of sleep nightly. I guess that’s the fortunate thing about unemployment – motivating yourself to burn through your own personal todo list is a lot easier than motivating yourself to do somebody else’s todo list for money. Because your own todo lists are worth more to you personally, you find yourself really going at them all day long every day, often without even pausing for food.

On the one hand, long may the todo list burndown last! On the other hand, restoring financial income would be rather handy too.

#house




Tuesday 9 September 2025: 21:19. I have been to and returned from the WG14 meeting in Czechia the week before last, and this week and a bit has been very much about iterating through all the chores I couldn’t start until the summer was over and the kids were back in school, which freed up the contiguous blocks of uninterrupted free time you need for certain chore items. Forward progress has been helped by the weather, which has been unpleasant, but there have been windows of a few hours of okay weather – last Friday I went up Bweengduff on the electric bike and I had a hoot going fast along the gravel tracks, as with the last two times. But despite the three layers of clothes, I was cold from the wind and I had to go hide under a tree twice due to heavy rain. In short, as much fun as I had – and what a bike that Fiido T2 Longtail is! – it wasn’t as pleasant as it would have been were it warmer and less moist. C’est la vie I suppose – in fairness, the summer just passed was one of the nicest Irish summers I can remember weather-wise, and when I was looking at the site solar panel power history for the last few months, it was very clearly unusually bright throughout. Almost every day, the batteries were back to full by 11am!

You might remember that this summer I was trialling a decade old PC running a decade old 8 Gb nVidia Tesla P4 AI inferencing accelerator card bought second hand off Aliexpress. Its purpose was to analyse the three security camera feeds on the site to see how much better a job it could do than the AI built into the cameras. I ran it for exactly two months, and my prediction that the 28 Tb hard drive would store a bit more than three months of video was spot on. I manually reviewed all the alerts the AI recognised during those two months, and it is markedly less prone to false positives than the cameras’ built in AI – which is to be expected. Still, the specific security camera specialist AI model I was running still got confused by ravens in particular – those like to flap around on the roof of the office in groups sometimes – and it regularly thought those were people (which the camera based AI also gets confused by). The PC AI did not get confused by cats – unlike the camera AI – and as expected it could see people much further away than the camera AI, whose internal resolution for the AI is surely quite coarse (and far below the camera’s 4k resolution). I think with a bit of tweaking and fiddling this solution is a marked improvement, albeit with an added ~80 watt power cost, which is almost exactly double the site’s current power draw, and which is why I can’t afford to run it outside the long summer days. The watt meter that I fitted read 19.6 kWh before I turned everything off – that seems absurdly low when 80 watts should result in ~58.4 kWh per month, or roughly 117 kWh over the two months, but if that watt meter wraps at 100 kWh then about 17 kWh would be left showing, which is close enough to the reading to make sense.

Last post I mentioned that there would be coming here soon a review of my new watch, a Huawei Watch D2, and my new phone, a Google Pixel 9 Pro. That won’t be this post – one of my big chores this week was to start replacing all the proprietary cloud solutions the site is currently using with my own infrastructure. This was greatly raised in priority because I intend to run GrapheneOS on the new phone, and that lets you segment Google Play Services off into its own enclosure along with only the apps which require Google Play Services. That enclosure is closed down every time you lock the phone, so it doesn’t run when the phone is locked, which means that anything Google Play Services based (including all of Google’s own stuff) can’t spy on you when it’s not being used. That, in turn, means that you won’t get any notifications through Google Firebase, which is the Google infrastructure for pushing notifications to phones. So, you need to set up your own notification push infrastructure, and there are many ways to do that.

That will however be the next post here, because there is something else which needs doing to this website implementation before I can fully move onto my new Google Pixel 9 Pro: what to do about HDR photos.

The sorry state of HDR photos in 2025

Last October I transitioned the videos shown in posts on this website to self-hosted, rather than hosted on YouTube. This was made possible by enough web browsers in use supporting AV1 encoded video (> 95% at the time) that I could reencode HDR10+ videos captured by my Samsung S10 phone into 1080p Full HD in ten bit Rec.2020 HDR with stereo AAC audio at a capped bitrate of 500 Kb/sec with – to be honest – quite spectacular retention of fidelity for such a low bitrate. One minute of video is only 3.8 Mb, so I was in the surprising situation that most of the JPEG photos hosted here are larger than a minute of video!

Video got widespread wide gamut (HDR) support quite a long time ago now. Not long after DCI-P3 and Rec.2020 were standardised around 2012, HDR video became widely available from about 2016 onwards, albeit at the time with huge file sizes (one friend of mine would only watch Blu-ray HDR content or better, so every movie he stored was a good 70Gb! That uses up a lot of hard drives very quickly …). Video games followed not long after, despite Microsoft Windows having crappy HDR support then and indeed still today. Then, basically everybody hit pause for a while, because for some reason nobody could agree on how best to implement HDR photos. It didn’t help that for a long time, Google was pushing WebP files, Apple was pushing HEIC files, and creatives were very keen on JPEG XL, which is undoubtedly the best technical solution to the problem (but in my opinion sadly likely to go the way of Betamax). The problem was – to be honest – that none was sufficiently better than JPEG to be worth upgrading a website for, and I, like almost everybody else, didn’t bother moving on from JPEG, in the same way everybody still seems to use MP3 for music because portability and compatibility trump storage consumption.

It didn’t help that implementations of WebP and HEIC only concentrated on smaller file sizes, which nobody cared about when bandwidth and storage costs kept exponentially improving. For example, the camera in my Samsung S10 does take photos in HDR, but you need to have it save them in RAW format, and then on a computer convert the RAW format into a Rec.2020 HDR image format to preserve the wide gamut. That was always too much hassle for me to bother with, especially as for video it natively records in Rec.2020 HEVC in the first place. What’s weird about that phone is that Samsung stores photos in HEIC format, which is HEVC compression under the bonnet and is absolutely able to use the Rec.2020 gamut. But Samsung very deliberately uses an sRGB colour space, which at the time they claimed was for better compatibility (despite the fact that almost nothing but Apple devices supports HEIC format images natively). The Samsung phone does convert those HEIC files into JPEG on demand, so perhaps using the same SDR gamut as JPEG was just easier, who knows.

That Samsung S10 phone was launched in 2019, the same year as the AVIF format. The AVIF image format stores images using the AV1 video codec, much in the same way as HEIC stores images using the HEVC video codec. Like HEIC, if your device has hardware acceleration for AV1 video, this can accelerate the rendering of AVIF images, which is important as these formats are computationally expensive to decode. Unlike HEIC though, AVIF did see widespread take up by the main web browsers and platforms, with everybody supporting AVIF by the start of 2024. As of the time of writing, according to https://caniuse.com/avif 95.05% of desktop web browsers currently in use support AVIF and 97.89% of mobile web browsers do so. While WebP is even more widely supported again, HDR support in WebP is not a great story. In short, AVIF is as good as it gets if you want to show HDR photos on websites.

Or is it? After many years of Google banging the WebP drum and not finding much take up, obviously another part of Google decided to upgrade the venerable JPEG format. Very recent Google Pixel Pros can now optionally save photos in ‘Ultra HDR JPEG’ format, which is a conventional SDR JPEG but with a second ‘hidden’ greyscale JPEG describing a ‘gain map’, so a Rec.2020 gamut image can be reconstructed from the SDR data. As the human eye isn’t especially sensitive to gamut at those ranges (which is why they were omitted from SDR in the first place), this works at the cost of some added file size, and it has the big advantage of backwards compatibility, because these are absolutely standard JPEGs to code which doesn’t know about the gain map. The wide gamut is only used if your image processing pipeline understands gain map extended JPEGs.

Although gain map extended JPEGs have been standardised as ISO 21496-1 and all the major vendors have agreed to support them, the standard only landed this year, so support for gain map extended JPEG in existing tooling is extremely limited. There is the official Google reference implementation library and the few bits of software which have incorporated that library. AVIF also supports gain map extended SDR images, but it is currently very hard to create one as tooling support is even worse than for JPEGs. Web browser support for gain map extended AVIF is also far more limited, with only year 2025 editions of Chrome based browsers supporting it. That said, in years to come gain map extended AVIF will be nearly as widely supported as AVIF, and with the claimed much reduced file size it could be the most future proof choice.

Why all this matters is that this website is produced by a static website generator called Hugo, and as part of generating this website it takes in the original high resolution images, generates many lower resolution images for each, and then emits CSS to have the browser choose smaller images when appropriate. There is absolutely zero chance that Hugo will support gain map extended JPEGs any time soon, as somebody would first need to write a Go library to support them. So image processing support for those is years away.

It’s not much better in the Python packaging space either – right now I can find exactly two PyPI packages which support gain map extended JPEGs. Neither seems to offer a lossless way of converting from gain map extended JPEG to gain map extended AVIF.

Converting losslessly between gain map extended image formats

It won’t be obvious until I explain it: rendering HDR as somewhat accurate SDR is hard at the best of times. Usually you have to supply a thing called a ‘tone map’ with your HDR video to say how to render this HDR as SDR. This is where colour profiles and all that complexity come in, and if you’ve ever seen HDR video content with all the wrong colours, that’s where things have gone wrong somewhere along the pipeline.

Something not obvious above is that a gain map extended JPEG doesn’t come with a tone map, nor a colour profile. The software which creates the gain map extended JPEG chooses as perfect as possible an SDR representation and an HDR representation, and emits the SDR image along with a delta describing how to approximate the HDR image from that SDR image.
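Roughly speaking – and glossing over the per-channel offsets and the gamma encoding of the gain map which the full ISO 21496-1 spec allows for, so treat this as an approximation of the idea rather than the exact maths – the reconstruction from the SDR image and the gain map G, whose decoded values lie in [0, 1], looks something like:

HDR(x, y) ≈ SDR(x, y) × 2^( G(x, y) × (log2(maxBoost) − log2(minBoost)) + log2(minBoost) )

where minBoost and maxBoost are stored in the gain map metadata and describe the smallest and largest brightness boosts the gain map can apply.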

The problem is that all the current image processing tooling thinks in terms of (a) here is your image content data and (b) this is what the colours in that image content mean. If I render just the SDR portion of the gain map extended JPEG into a RAW format, I lose the HDR side of things. But the same goes if I render the HDR portion: then I lose what the device thought was the best SDR representation.

Therefore, if you want to convert between gain map extended image formats without losing information, right now you need to emit the gain map extended JPEG firstly in raw SDR and then in raw HDR. You then need to tell your AVIF encoder to encode that raw SDR with a gain map using the raw HDR to calculate the gain map.

The tool in libavif to do that wasn’t working right as of a few months ago, and invoking all this tooling correctly is very arcane. Luckily, this exact problem affects lots of people, and I found a fork of Google’s libultrahdr which adds AVIF emission. That fork is literally being developed right now; its most recent commit was two days ago.

Gain map extended JPEG to gain map extended AVIF via libultrahdr

Due to its immature state, right now that fork of libultrahdr cannot create a gain map extended AVIF directly from a gain map extended JPEG, so you need to traverse through a raw uncompressed file.

That’s fine, but I was rather surprised to (a) see how very long it takes this tool to create a gain map extended AVIF – but let’s assign that to the ‘this is alpha quality code’ category – and (b) that the gain map extended AVIF file is twice the size of the original gain map extended JPEG.

That produced a ‘huh?’ from me, so I experimented some more:

  • A gain map extended JPEG from an input gain map extended JPEG is also twice the size of the original.
  • That suggested dropping quality settings would help, so I reduced the quality of the gain map to 75% leaving the SDR picture at 95%: now the AVIF file is the same size as the original JPEG.
  • Dropping quality for both sides to 75% yields a file 60% smaller than the original JPEG.

I can’t say I’m jumping up and down about a 60% file size reduction. AVIF is normally a > 90% file size reduction over JPEG.

In any case, this fork of libultrahdr can’t do resizing, so in terms of helping me solve my photo downsizing problem for Hugo, this isn’t much help.

Gain map extended JPEG to gain map extended JPEG via ImageMagick

The traditional Swiss army knife for doing stuff with images is ImageMagick, and if you’re willing to compile from source you can enable a libultrahdr processing backend. There is good reason why it isn’t turned on by default, because the support for gain map extended images is barely there at all.

I’m about to save you, the reader, many hours of trial and error on how to resize a gain map extended JPEG using ImageMagick built from source, and I suspect that had I not spent plenty of time messing around with libultrahdr, this wouldn’t have come to me at all.

Firstly, extract the SDR edition of the original gain map extended JPEG into a raw TIFF applying any resizing you want to do. Make SURE you turn on floating-point processing for all steps, otherwise you’ll see ugly gamut banding in the final output:

magick -define quantum:format=floating-point \
  PXL_20250908_164927689.jpg \
  -resize 10% test_sdr.tif

Now extract the HDR edition, but be aware that the raw TIFF generated is not even remotely correct, but it won’t matter because you’re preserving the original information in the gain map extended JPEG:

magick -define quantum:format=floating-point \
  -define uhdr:hdr-color-gamut=display_p3 -define uhdr:output-color-transfer=hlg \
  uhdr:PXL_20250908_164927689.jpg \
  -resize 10% test_hdr.tif

Now here comes the non-obvious part: here is how to tell ImageMagick to feed the raw SDR and HDR TIFFs into libultrahdr to create a new, reduced size, gain map extended JPEG:

magick -define quantum:format=floating-point \
  -define uhdr:hdr-color-gamut=display_p3 -define uhdr:hdr-color-transfer=hlg \
  -define uhdr:gainmap-quality=80% -quality 80 \
  \( test_sdr.tif -depth 8 \) test_hdr.tif \
  uhdr:test2.jpg

The 80% quality setting was found to produce an almost identically sized output to the original if output at identical resolution. My Macbook Pro M3 will display 100% of DCI-P3 but only 73% of Rec.2020. Zooming in and out, the image detail at 80% is extremely close to the original, but the colour rendering is very slightly off – I would say that the output is ever so slightly more saturated than the original. You would really need to stare closely at side by side pictures to see it however, at least on this Macbook Pro display. I did try uhdr:hdr-color-gamut=bt2100, but the colour rendering is slightly more off again. libultrahdr supports colour intents of (i) bt709 (i.e. SDR) (ii) DCI-P3 (iii) bt2100 (i.e. Rec.2020), so display_p3 I think is as good as it gets with current technology.
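For what it’s worth, here is roughly how those three steps chain together if you want several output widths in one go – the widths and output filenames are illustrative, and this is just a wrapper around the commands above rather than the actual Hugo integration:

for width in 480 960 1440; do
  # extract resized SDR and HDR editions as floating-point TIFFs
  magick -define quantum:format=floating-point \
    PXL_20250908_164927689.jpg -resize ${width}x sdr_${width}.tif
  magick -define quantum:format=floating-point \
    -define uhdr:hdr-color-gamut=display_p3 -define uhdr:output-color-transfer=hlg \
    uhdr:PXL_20250908_164927689.jpg -resize ${width}x hdr_${width}.tif
  # recombine them into a resized gain map extended JPEG
  magick -define quantum:format=floating-point \
    -define uhdr:hdr-color-gamut=display_p3 -define uhdr:hdr-color-transfer=hlg \
    -define uhdr:gainmap-quality=80% -quality 80 \
    \( sdr_${width}.tif -depth 8 \) hdr_${width}.tif \
    uhdr:resized_${width}.jpg
  rm -f sdr_${width}.tif hdr_${width}.tif
done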

So we are finally there: we now have a workable solution to the Hugo image processing pipeline which preserves HDR in images! I am a little disappointed that gain map extended AVIF with sufficiently smaller file sizes isn’t there yet, but I can surely revisit solving this in years to come.

Let’s see the money shots!

So, here we go: here are the first HDR photos to be posted on this site. They should retain their glorious HDR no matter what size the webpage is (i.e. the reduced size editions will be chosen, and those also have the HDR):

In case the difference that the HDR makes isn’t obvious enough, here is an HDR and an SDR edition side by side. If your display is able to render HDR, this should make the difference quite obvious:

All that took rather more effort to implement than I had originally expected, but now it’s done I am very happy with the results. Web browsers will remain unable to render HDR in CSS for a while yet, though here’s a try of the proposed future HDR CSS:

This may have a very bright HDR yellow background!

… and no web browsers currently support HDR CSS, at the time of writing.

When HDR CSS does land, I’m not sure if I’ll rework all the text and background to be HDR aware or not. I guess I’ll cross that bridge when I get to it.

For now, enjoy the new bright shiny photos!





