Welcome to ned Productions


Welcome to ned Productions (non-commercial personal website, for commercial company see ned Productions Limited). Please choose an item you are interested in on the left hand side, or continue down for Niall’s virtual diary.

Niall’s virtual diary:

Started all the way back in 1998 when there was no word “blog” yet, hence “virtual diary”.

Original content has undergone multiple conversions: Microsoft FrontPage => Microsoft Expression Web, legacy HTML tag soup => XHTML, XHTML => Markdown, and a ‘various codepages’ => UTF-8 conversion for good measure. Some content, especially the older stuff, may not have entirely survived intact.

You can find the posts here replicated onto Diaspora, if you prefer to subscribe there instead.

Latest entries:

Monday 8 March 2021: 20:44. I had a pretty bad week last week at work. My main development workstation had, on the preceding Friday, crashed taking my development VMs with it, so I spent the weekend before last reinstalling Ubuntu 20.04 with ZFS-on-Root (my standard Linux setup for five years now!), moving the work codebase onto that, shared with Windows over Samba, with the intent of building for Linux within the VM, and building for Windows from the Samba share. This was a big divergence from my previous setup of Windows Subsystem for Linux v1 doing the builds for both Linux and Windows, where I’d then run the Linux executables over a Samba share. There is nothing wrong with my former setup for smaller codebases, but as the work codebase approaches 150k LOC, WSL v1 based Linux builds are getting unwieldily slow. And WSL v2 is the same as a Linux VM, except the file system is shared by 9p rather than Samba, and 9p is very considerably slower than Samba, so you’re much better off configuring your own Linux VM and Samba installation and tuning the snot out of Samba.

Anyway, all of last week my developer workstation kept locking up, losing work in progress. I tried relocating the NVMe SSD (a Samsung 970 Pro) into a different M.2 socket, and since then it appears to be reliable again. But that’s water under the bridge; what I’m here to talk about now is how I fixed Visual Studio 2019 not building reliably over a Samba share, because absolutely nobody else seemed to have found a solution to this oft-reported problem (well, apart from this guy here who found a workaround to a related but different problem which has the same manifestation as mine).

Firstly, I am not building into the Samba share. I create a build directory on Windows, and tell cmake to populate that Windows build directory from a git worktree located on a mapped network drive M:\, which is the Samba share of the git worktree in the Linux VM \\kate-linux. As the build never writes into the source worktree, Samba is only being used here for reads, and so thanks to opportunistic locking (oplocks), Windows aggressively caches the source tree and build performance is pretty close to native speed.

Except, it’s not quite reliable. 99.9% of the time it works fine. But occasionally MSVC doesn’t find some header file, or Visual Studio refuses to save a file, and if you look in the directory it is creating lots of orphaned temporary files from the failed saves. The problem is much worse if you use --parallel with cmake --build . --config Debug, where MSVC will fail to find lots of header files, sufficiently so that you don’t get a usable build. Initially I thought this was purely a MSVC/Visual Studio problem, as it only ever appeared there, not helped by all the google searches reporting the same problem, almost all of which also mentioned MSVC/Visual Studio. But I also noticed that occasionally executing git from Windows, where the git repo was on the mapped network drive, would fail too with messages such as:

fatal: update_ref failed for ref 'HEAD': cannot lock ref 'HEAD': unable to create lock file non-directory in the way

… and other messages suggesting that the network share was being racy with respect to changes on the network share.

My initial thought was that Samba must be misconfigured, even though it was running a near-default config, and Ubuntu 20.04’s Samba is v4.11.6 which, to the best of my knowledge, has no known major bugs and a default config pretty well tuned for performance, unlike versions before Samba v4. I spent all last week, whenever I was waiting on a Linux build, trial-and-error A-B testing various network and Samba configurations, alas to no avail.

This weekend passed, and today Monday morning I had a bit of a brainwave: What if Samba is absolutely fine, and it is Windows 10 which is the cause?

That led me to Microsoft’s documentation page about SMB2 Redirector Caches which documents three registry settings to fiddle with. It turns out that setting these parameters in HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters fixes all my MSVC failed-to-find-file, Visual Studio 2019 failed-save-edited-file, and git failed-to-checkout-branch problems:

  1. DirectoryCacheLifetime = (DWORD) 0
  2. FileNotFoundCacheLifetime = (DWORD) 0
  3. FileInfoCacheLifetime = (DWORD) 0
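For convenience, the same three values expressed as a .reg file you can import with regedit (equivalent to setting the DWORDs by hand):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters]
"DirectoryCacheLifetime"=dword:00000000
"FileNotFoundCacheLifetime"=dword:00000000
"FileInfoCacheLifetime"=dword:00000000
```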

After you have set these using regedit, run services.msc, find the Workstation service and restart it. To verify it’s working, open PowerShell with Administrator privileges and run Get-SmbClientConfiguration:

ConnectionCountPerRssNetworkInterface : 4
DirectoryCacheEntriesMax              : 16
DirectoryCacheEntrySizeMax            : 65536
DirectoryCacheLifetime                : 0
DormantFileLimit                      : 1023
EnableBandwidthThrottling             : True
EnableByteRangeLockingOnReadOnlyFiles : True
EnableInsecureGuestLogons             : False
EnableLargeMtu                        : True
EnableLoadBalanceScaleOut             : True
EnableMultiChannel                    : True
EnableSecuritySignature               : True
ExtendedSessionTimeout                : 1000
FileInfoCacheEntriesMax               : 64
FileInfoCacheLifetime                 : 0
FileNotFoundCacheEntriesMax           : 128
FileNotFoundCacheLifetime             : 0
KeepConn                              : 600
MaxCmds                               : 50
MaximumConnectionCountPerServer       : 32
OplocksDisabled                       : False
RequireSecuritySignature              : False
SessionTimeout                        : 60
UseOpportunisticLocking               : True
WindowSizeThreshold                   : 8

Note the zero values for the parameters we forced to zero, but large MTUs remain on, oplocks are on, and multichannel is on.

SMB Multichannel is probably the only major Samba performance enhancing feature not enabled by default in Samba v4. This is because it was buggy until recently, but now it’s working very well. SMB Multichannel lets file transfers multiplex over multiple TCP connections, so just like with Download Accelerators on the internet, you can multiply a per-TCP-connection maximum several fold over multiple connections, thus greatly increasing transfer rates. This isn’t particularly important for many small files like during a C++ compile run, but if you have multiple threads all accessing a single Samba share, with SMB Multichannel those threads actually see some concurrency whereas without SMB Multichannel, they all get funnelled through a single TCP connection with a global mutex. So, for a parallel build like what Visual Studio now does by default, SMB Multichannel is a big gain.
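To see why that single global funnel hurts a parallel build, here’s a toy simulation (pure Python, nothing to do with real SMB; the 10ms sleep merely stands in for one network round trip):

```python
import threading
import time

IO_TIME = 0.01  # pretend each read costs one 10ms network round trip


def worker(lock, n_ios):
    for _ in range(n_ios):
        if lock is not None:
            with lock:  # single TCP connection: a global mutex serialises all I/O
                time.sleep(IO_TIME)
        else:
            time.sleep(IO_TIME)  # own channel: I/Os overlap freely


def run(n_threads, n_ios, shared_lock):
    threads = [threading.Thread(target=worker, args=(shared_lock, n_ios))
               for _ in range(n_threads)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start


single = run(4, 5, threading.Lock())  # everyone funnelled through one connection
multi = run(4, 5, None)               # one channel per thread, as with Multichannel
print(f"single connection: {single:.2f}s, multichannel-ish: {multi:.2f}s")
```

With four threads doing five I/Os each, the serialised case takes roughly four times as long as the concurrent one, which is the shape of the win Multichannel gives a parallel build.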

You can see if your Hyper-V Linux VM and your Windows installation are already employing SMB Multichannel using this command in an Administrator privileged PowerShell:

Get-SmbMultichannelConnection -IncludeNotSelected

Server Name Selected Client IP Server IP Client Interface Index Server Interface Index Client RSS Capable Client RDMA Capable
----------- -------- --------- --------- ---------------------- ---------------------- ------------------ -------------------
kate-linux  False                                             7                      2 False              False
kate-linux  False                                            11                      1 False              False
kate-linux  False                                            11                      1 False              False
kate-linux  False                                             7                      1 False              False
kate-linux  True                                              7                      1 True               False
kate-linux  False                                            11                      2 False              False

If it prints nothing, SMB Multichannel is NOT being employed.

If your Samba is v4.13 or later, it should autodetect on its own whether your network setup is RSS capable. If both sides can do RSS, enabling SMB Multichannel is as simple as adding this into your smb.conf:


server min protocol = SMB3
server multi channel support = yes

Note you need to reboot your Linux VM and then your host Windows machine before this takes effect.

If your Samba is earlier than v4.13, you will need to either force RSS on (ideal, as it can parallelise according to the CPUs in your machine) or assign more than one network adapter to both your VM and your host on the Hyper-V bridge (not ideal, as maximum concurrency is then the number of NIC pairs between Linux and Windows). Here is how you force Samba to advertise support for RSS and RDMA:

interfaces = ";if_index=1,capability=RSS,capability=RDMA,speed=10000000000"

Obviously, you will need a static IP for your Linux VM for this to work, and you need to enable RSS in the virtual 10Gb NIC and on the Hyper-V bridge you are using.

I left RDMA enabled in there too, though it only makes sense on real hardware with a sufficiently capable real NIC in a real server. Obviously, if you do have such capable hardware, you can sustain 10Gb/sec on a 100Gbit link with 256Kb per i/o @ QD8, or 2Gb/sec on a 100Gbit link with 4Kb per i/o @ QD200. Over a software emulated switch and NIC, SMB Multichannel mainly increases concurrency for both host and VM, helping ameliorate the VM<=>Host latency.
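As a sanity check on those figures (reading them as bytes per second, which I assume is the intent), Little’s law says bytes in flight = throughput × latency, so you can back out the per-I/O latency each scenario implies:

```python
def implied_latency_us(throughput_bytes_per_sec, io_size_bytes, queue_depth):
    # Little's law: bytes in flight = throughput * latency,
    # so latency = (io_size * queue_depth) / throughput
    in_flight = io_size_bytes * queue_depth
    return in_flight / throughput_bytes_per_sec * 1e6


# 256 KiB I/Os at QD8 sustaining 10 GB/sec implies ~210 microsecond I/Os
print(round(implied_latency_us(10e9, 256 * 1024, 8)))   # 210
# 4 KiB I/Os at QD200 sustaining 2 GB/sec implies ~410 microsecond I/Os
print(round(implied_latency_us(2e9, 4 * 1024, 200)))    # 410
```

Both work out to a few hundred microseconds per I/O, which is the sort of latency RDMA hardware can actually deliver.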

Finally, the only other settings which Samba v4 doesn’t currently enable by default which might help are:

use sendfile = yes
socket options = TCP_NODELAY IPTOS_LOWDELAY

TCP_NODELAY is already on by default in Samba v4, but IPTOS_LOWDELAY is not (note that setting socket options replaces the defaults, so TCP_NODELAY must be restated). This might improve performance a bit now that Windows does no caching of metadata whatsoever after the registry changes above. And use of kernel sendfile() to zero-copy transmit files is off by default, for some reason, so turning it on might reduce CPU cache loading a little.

Hopefully this helps other people figure out the solution to what has been, for me, a very frustrating week getting Visual Studio/MSVC to reliably build from a git worktree supplied over a Linux Samba share.


Sunday 21 February 2021: 00:48. We currently live in a former council house near Mallow, Cork which suffers rather from damp and mould, though certainly not as badly as many Irish houses, especially ones in the humid south. This gives us all constant coughs, plus our sinuses are always congested, and we often don’t sleep as well as we might because we wake up early due to being unable to breathe, being all bunged up. I have a dehumidifier which is excellent at drying out a room sufficiently that one sleeps very well, but it is (a) noisy (b) expensive on electricity to run. So I’ve been looking for something which could clean out some of the crap which gets into the air, is quiet enough it can run 24/7, is cheap enough that I can deploy it throughout the house permanently, and at least reduces the severity of what ails us from this house.

After a bit of research, I eventually settled on the Xiaomi Air Purifier 3C for €90 delivered within the EU from Aliexpress, buying one for every room in the house. These are the cheapest edition of the third generation of Xiaomi’s very popular air purifier range widely used in the big cities of China, India and Poland to reduce the fine particulate air pollution inside your home to less toxic levels (for every 10 μg/m3 increase of PM2.5 in your air, there is a +36% increase in lung cancer, and 7.5% of all heart attacks are due to PM2.5). My main attraction to them was that they can be controlled over wifi without a cloud connection (or indeed any access to any other network) by python-miio, so I can script individual behaviours for each such as when to go fast (daytime), and when to go slow (night time). Here’s my kitchen 3C in action:

They’re a very simple design, particularly the low end 3C model, which has almost no onboard intelligence, unlike the more expensive models (one doesn’t need onboard intelligence if one is scripting them). They consist of a plastic base, a replaceable filter which fits into it, a quiet and high efficiency variable-RPM AC turbine fan, an LED display, and a very low end ARM CPU with 2.4GHz 802.11n WiFi and Bluetooth (the CPU is so low end that ping times are in the 400-700ms range, and REST API calls take 1.5 seconds or so). The turbine fan will spin at any RPM you choose between 300 and 2200, in 10 RPM increments. Finally, there is a laser-based PM2.5 sensor, but on this cheapest model no temperature nor humidity sensors.

As you can see below, my 3C came with the grey filter, which is claimed to meet the EN 1822 H13 (HEPA) standard specification removing at least 99.95% of all particulates equal or exceeding 0.3 microns in size (i.e. PM0.3). Note that anything less than 0.3 microns will get filtered by almost any kind of filter, because smaller particles bounce around a lot and get trapped by just about any density of fibre – therefore, these purifiers readily scrub the air of covid-19 and most other viruses too. Inside the wood fibre HEPA filter there is an additional activated carbon filter, which might soak up some odours and Volatile Organic Compounds (VOCs).

Despite this model’s simplicity, no other air purifier comes remotely close to the feature set for €90 delivered. In fact, you’d probably need to multiply by six if you want a well known brand such as Blueair with a similar feature set. Obviously, a Blueair model for €550 is going to beat one of these for €90, but I’m fairly sure that six of these would handily beat a single Blueair, and do it quieter with cheaper replaceables, even if the Xiaomi filter isn’t as good as it claims.

Of course, by far the most important part of any air purifier is the filter itself, partly because it determines whether the device will be of any use or not, but also because it tends to be the expensive consumable. As with all things Chinese, there are a lot of clones of Xiaomi filters, but I believe I screened those out. Incidentally, I discovered on a Chinese forum actual numbers for Xiaomi’s claims for their different filters. I don’t believe these are easily findable in English, so I’m going to list them here for your (and my later) convenience:

| Type                  | Model   | Efficiency           | Colour      | RFID | Device support       | Price on Aliexpress delivered to EU |
| --------------------- | ------- | -------------------- | ----------- | ---- | -------------------- | ----------------------------------- |
| EPA Economical        | M2R-FLP | EN 1822 E12 (99.5%)  | Blue        | Yes  | 2/2C/2H/2S/Pro/3C/3H | €31                                 |
| EPA Anti-bacterial    | MCR-FLG | EN 1822 E12 (99.5%)  | Pink Purple | Yes  | 2/2C/2H/2S/Pro/3C/3H | €33                                 |
| EPA Anti-formaldehyde | M1R-FLP | EN 1822 E12 (99.5%)  | Green       | Yes  | 2/2C/2H/2S/Pro/3C/3H | €36                                 |
| HEPA                  | M8R-FLH | EN 1822 H13 (99.95%) | Grey        | Yes  | 2/2C/2H/2S/Pro/3C/3H | Unavailable                         |

Each filter has an RFID chip which tracks how many hours it has been used for, and you will be pestered to replace it after about six months of continuous usage. The EPA grade filters are available, with some difficulty in finding that price, for as little as €31 each delivered within the EU. I could not find the HEPA filters for sale, though as they are only just new on the market you can barely buy them in China either yet, so that situation may improve within six months. Assuming they come in at €45 or more, the filter would be half the cost of buying the whole purifier!

As the air in Ireland is extremely clean from a PM2.5 perspective, auto mode based on the sensor reading isn’t useful here; that afflicts a high end brand such as Blueair just as much as this Xiaomi unit. You therefore need to override them to run faster all the time in order to clean the air of mould spores, and that’s where the scriptability over WiFi comes in, because I don’t really want to have to go around the house adjusting these by hand. Here is the script I wrote to control them. You can obtain the device token using the instructions from Home Assistant: I set up a separate WiFi SSID on a VLAN, used the Xiaomi Home app to register the devices, then used the Xiaomi Cloud Tokens Extractor to get the tokens, then closed off all access between the VLAN and any other network including the internet. This script connects into the VLAN using a source IP spoofing NAT.


from miio import airpurifier_miot
from miio.exceptions import DeviceException
import time

# NB: the device calls below (status(), set_led_brightness_level(),
# set_mode(), set_favorite_rpm()) are python-miio's AirPurifierMB4 API
# as of early 2021.
class Purifier:
  def __init__(self, ip, token, name):
    self.ip = ip
    self.__token = token
    self.name = name
    self.__inst = None
    self.available = False
  def update(self):
    try:
      if self.__inst is None:
        self.__inst = airpurifier_miot.AirPurifierMB4(self.ip, self.__token)
      status = self.__inst.status()
    except (airpurifier_miot.AirPurifierMiotException, DeviceException) as e:
      print("Failed to connect to", self.ip, "(" + self.name + ") due to", repr(e))
      self.available = False
      return
    self.available = True
    self.mode = status.mode
    self.powered_on = 'on' in status.power
    self.air_ppm = 0 if status.aqi is None else status.aqi
    self.led_brightness = int(status.led_brightness_level)
    self.current_rpm = int(status.motor_speed)
    self.filter_hours_used = int(status.filter_hours_used)
    self.filter_life_remaining = int(status.filter_life_remaining)
  def __repr__(self):
    ret = 'Purifier(%s) available=%d' % (self.name, self.available)
    if self.available:
      ret += ' powered_on=%d air_ppm=%d led_brightness=%d current_rpm=%d filter_hours_used=%d filter_life_remaining=%d%%' % (self.powered_on, self.air_ppm, self.led_brightness, self.current_rpm, self.filter_hours_used, self.filter_life_remaining)
    return ret
  def enable_display(self):
    if self.available and self.led_brightness != 8:
      print('Purifier(%s) setting display to %d, led_brightness = %d' % (self.name, 8, self.led_brightness))
      self.__inst.set_led_brightness_level(8)
  def disable_display(self):
    if self.available and self.led_brightness != 0:
      print('Purifier(%s) setting display to %d, led_brightness = %d' % (self.name, 0, self.led_brightness))
      self.__inst.set_led_brightness_level(0)
  def set_rpm(self, newrpm):
    # Only bother the (very slow) device if the change is worth making
    if self.available and abs(self.current_rpm - newrpm) > 20:
      if newrpm <= 400:
        if self.mode != airpurifier_miot.OperationMode.Silent:
          print('Purifier(%s) setting silent RPM to %d, current_rpm = %d' % (self.name, newrpm, self.current_rpm))
          self.__inst.set_mode(airpurifier_miot.OperationMode.Silent)
      else:
        if self.mode != airpurifier_miot.OperationMode.Favorite:
          print('Purifier(%s) setting favourite RPM to %d, current_rpm = %d' % (self.name, newrpm, self.current_rpm))
          self.__inst.set_mode(airpurifier_miot.OperationMode.Favorite)
        self.__inst.set_favorite_rpm(newrpm)

purifiers = {
  'Kitchen' : Purifier('192.168.xxx.xx0', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'Kitchen'),
  'Master Bedroom' : Purifier('192.168.xxx.xx1', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'Master Bedroom'),
  'Kids Bedroom' : Purifier('192.168.xxx.xx2', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'Kids Bedroom'),
  'Spare Bedroom' : Purifier('192.168.xxx.xx3', 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'Spare Bedroom'),
}

while True:
  now = time.localtime()
  for name in purifiers:
    p = purifiers[name]
    p.update()
    print(time.asctime(now), p)
    if p.available:
      try:
        if p.powered_on:
          if 'Kitchen' in p.name or (now.tm_hour >= 9 and now.tm_hour <= 21):
            # Daytime running (target RPMs here match the logs below)
            p.enable_display()
            if p.air_ppm >= 5:
              new_rpm = 1100 + p.air_ppm * 10
              if new_rpm > 2200:
                new_rpm = 2200
              new_rpm -= new_rpm % 10
            else:
              new_rpm = 1100
            p.set_rpm(new_rpm)
          else:
            # Nighttime running
            p.disable_display()
            p.set_rpm(700)
      except (airpurifier_miot.AirPurifierMiotException, DeviceException) as e:
        print("Failed to set", p.ip, "(" + p.name + ") due to", repr(e))

  # No sleep needed: each REST call takes a second or two, which paces the loop
  if now.tm_hour >= 22 and now.tm_min > 1:
    print("It is after 22.01pm, exiting!")
    break

Sure, it’s not pretty, but it does the job until support for these purifiers lands in Home Assistant. I have it running inside a cronjob which launches at 08.59am, and the script exits itself at 22.02pm. This is to prevent constant WiFi traffic at night time, as I have one of these units right next to my head in my bedroom. As the script runs, it outputs a constant sequence of logging which looks like:
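For reference, the cron entry driving this looks something like the following (the script path and log destination here are made up, substitute your own):

```
# min hour dom mon dow  command
59   8    *   *   *    /usr/bin/python3 /home/niall/purifiers.py >>/tmp/purifiers.log 2>&1
```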

Sun Feb 21 00:13:15 2021 Purifier(Kitchen) available=1 powered_on=1 air_ppm=0 led_brightness=8 current_rpm=1100 filter_hours_used=87 filter_life_remaining=97%
Sun Feb 21 00:13:15 2021 Purifier(Master Bedroom) available=1 powered_on=1 air_ppm=0 led_brightness=0 current_rpm=704 filter_hours_used=80 filter_life_remaining=98%
Sun Feb 21 00:13:15 2021 Purifier(Kids Bedroom) available=1 powered_on=1 air_ppm=0 led_brightness=0 current_rpm=704 filter_hours_used=81 filter_life_remaining=98%
Sun Feb 21 00:13:15 2021 Purifier(Spare Bedroom) available=1 powered_on=1 air_ppm=0 led_brightness=0 current_rpm=704 filter_hours_used=81 filter_life_remaining=98%
Sun Feb 21 00:13:24 2021 Purifier(Kitchen) available=1 powered_on=1 air_ppm=0 led_brightness=8 current_rpm=1104 filter_hours_used=87 filter_life_remaining=97%

Something a bit worrying is what happens whenever one cooks dinner, so the hob or oven is on. My script rapidly increases RPM on a per-purifier basis if its PM2.5 sensor exceeds five μg/m3, hitting the maximum 2200 RPM from 110 PM2.5 onwards. So far, we have never failed to hit at least 50 PM2.5 in the kitchen, sometimes over 300 PM2.5, though the purifier does clear it within fifteen minutes after you stop cooking. More worrying again is that this pollution gets all over the house: the other purifiers register 30-40 PM2.5 in our bedrooms.
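For clarity, the daytime ramp described above boils down to this little function (a restatement of the logic in the script, not new behaviour):

```python
def target_rpm(pm25):
    """Daytime fan ramp: base 1100 RPM plus 10 RPM per ug/m3 of PM2.5,
    clamped at the fan's 2200 RPM ceiling and rounded down to the
    10 RPM granularity the 3C accepts."""
    rpm = min(1100 + pm25 * 10, 2200)
    return rpm - rpm % 10


# the ceiling is reached from a PM2.5 reading of 110 onwards
print(target_rpm(5), target_rpm(50), target_rpm(110), target_rpm(300))
# 1150 1600 2200 2200
```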

Now, I knew from general reading that this is typical of UK and Irish homes, which are unusually poorly ventilated by international norms, but until now it was all kinda abstract. I hadn’t really realised what it actually meant until I saw these sensors all jump throughout the house at every dinner and lunch time when we cook hot food. We do have an extractor over the hob, but it’s a cheap noisy thing which doesn’t seem to extract much. So it looks like these air purifiers might do some good there too, which was not expected before I bought them.

#air-purifiers #xiaomi

Sunday 14 February 2021: 00:08. A little over a week ago I returned to work from lunch to discover this rather unpleasant surprise:

The right half of my monitor had vanished! And it quickly transpired that the cause could not be anything else but the monitor, a 27 inch 2K Hazro HZ27WD which I bought just under ten years ago.

Never heard of Hazro? They were a British thing. Basically someone in the UK contracted some South Korean manufacturer to stick the exact same high end LG IPS panel as was then in the Apple Cinema Display and professional grade Dell monitor into a very cheap all plastic chassis with a then-novel LED backlight and knock them out for under £400, which at that time, was ludicrously cheap for a 2560x1440 IPS monitor. Needless to say they sold like hotcakes, though many died after a few years with the exact problem mine died with. On that basis, getting nearly a decade of use from it was exceptional.

Because that panel was the very highest end available a decade ago, it lasted remarkably well in terms of display quality – I didn’t get a display which beat it until two years ago, with the Dell XPS 13 I’m typing on now, which is not only 4K resolution, but also has slightly better colour reproduction (80% DCI-P3) than that 2011 LG panel (75% Adobe RGB). I hadn’t thus felt much need to upgrade prematurely, or indeed keep up to date with the latest in monitor technology. I was perfectly happy with that Hazro until it died.

Cue, therefore, after its confirmed death, an enormous bout of reading to choose a replacement, as the choice available is legion, and there is a surprising amount of overpriced dross in the market mainly aiming for purchase by those who haven’t done their research (this situation is very similar to mobile phones and cars, most people get “whatever’s easiest” rather than what suits their personal use case the best).

I was very definite that I wanted a 4K display at 30 inches or more, so I could use 100% text scaling and still actually read stuff (I use 200% scaling on my laptop’s 4K screen, and even then I wouldn’t want to be spending ten hours a day in front of such small text). I also wanted:

  • Full DCI-P3 colour gamut.
  • Accurately calibrated from the factory.
  • Adaptive frame sync for AMD GPUs (Freesync).
  • A VESA mount as such a big panel is a pain to place ergonomically otherwise.
  • High quality scaling from 2k to 4k with low lag for a bit of gaming.
  • An environment sensor so the display adjusts itself over time to the room like my phone does.
  • Definitely DisplayPort and HDMI inputs, ideally also USB-C for my laptop/the future.

And, of course, I didn’t want to pay too much, ideally under €500 ex VAT (€600 inc VAT) which I felt was a reasonable budget.

Given that the display of my current phone, the Galaxy S10, is the best I have ever gazed upon in my life, period, my first thought was for an OLED monitor. Dell used to do a ‘normally sized’ one of those (i.e. not under 14 inches, not more than 40 inches) a few years ago, but they discontinued it as apparently its display wasn’t very good for its steep price. LG have announced the 32 inch 4K 32EP950 with a JOLED OLED panel, 10 bit native colour and 99% DCI-P3 gamut coverage, but it’s not for sale yet, and when it does go on sale, I very much doubt that it would remain in stock for long if it’s reasonably priced, as LG claim it will be. And even then, reasonably priced would be defined here as under two grand, or three times my budget.

Having ruled that out, I next looked at Dell’s monitors, and only found the 27 inch models appealing, which were too small. I then looked at the gaming monitors, but their hefty prices and things like built-in fans put me off. I looked at the ultrawides, but I was unwilling to pay for the graphics card I’d need to power one of those, and besides, to be honest, most of an ultrawide monitor is outside your field of vision in any case, so in my opinion it’s a waste of money for productivity uses (plus, many of them have colour reproduction issues caused by the curve in the monitor). As my shortlist of possible candidates built up, I began to notice that almost all of them were well over a grand, so the featureset I was looking for obviously did not come cheap.

The first monitor I saw to get me excited enough to start reading its user manual was the LG 32UN880. It has:

  • 31.5 inch 4K IPS display with 95% of DCI-P3 colour gamut. LG panel.
  • AMD Freesync with nVidia compatibility.
  • USB-C input support.
  • Comes with desk clamp stand and has complete motion, including swivel.
  • Delivered for under €700 inc VAT, if you can find it in stock (none in the UK nor Germany at that time).

It seemed to tick most of my boxes, but there was a real paucity of professional reviews out there (just one!), and almost no information on or experiences of people playing games on it. I didn’t know, for example, what input lag it might have when scaling 2K content to its 4K panel, and the lack of evaluation, apart from short user reviews along the lines of ‘I love this monitor’, wasn’t ideal.

After lots more searching, I eventually stumbled on another monitor exciting enough for me to read its user manual, which was the BenQ EW3280U:

  • 32 inch 4K IPS display with 95% of DCI-P3 colour gamut. AU Optronics panel.
  • AMD Freesync with nVidia compatibility.
  • USB-C input support.
  • VESA mountable.
  • Better HDR than the LG, though still not a patch on the much brighter local dimming backlight models costing twice as much.
  • An environment sensor, so it can automatically adjust its brightness and colour to the current surroundings.
  • A subwoofer (very unusual!) and a remote control (even more unusual).
  • Delivered for under €700 inc VAT, from multiple suppliers across Europe including Amazon.

Unlike the LG, the EW3280U has plenty of in depth reviews, including many hours of YouTube video reviews and deep testing of the monitor in almost every conceivable use case. I read and watched a lot of those reviews, and felt I knew what I was buying, so I pulled the trigger from Amazon Germany to avoid the Brexit tax. It arrived seven days later:

After it arrived and I’d used it for a day, I realised it was too high up, so I ordered a desk clamped mount, the ErGear EGSS6-E, from Amazon UK for £31 (which is under the Brexit tax threshold); it claims 12kg of monitor weight support (the BenQ is 8kg). Despite it being pretty much the cheapest VESA mount on Amazon able to handle bigger monitors, I found it well made and it works well: no problems with the size nor weight of this monitor at all, and you can move the monitor up, down, side to side, in and out, and tilt/rotate it:

So what’s this monitor like to look at? It is the best monitor I’ve ever run through http://www.lagom.nl/lcd-test/: perfect out of the box across all the tests. Colour reproduction and gamma are spot on without adjustment, at all brightnesses, in so far as I can test without professional equipment. No dead nor bright pixels. There is surprisingly little colour shift with head position on the IPS panel; display technologies have improved greatly in the ten years since I bought my previous monitor. This is a HDR and wide gamut monitor, but my five year old graphics card can’t do better than deliver a 10 bit sRGB signal, so I can’t comment on any of that stuff unfortunately. Sound quality from the monitor is unusually good due to its subwoofer; it’s actually usable for EDM, though the lowest bass tones, like in dubstep, don’t render. You can optionally have it change its colour and brightness rendering to match your current environment, e.g. turning on the room light (which is warm white) has a noticeable effect on the monitor, and that sensor can be disabled. The choice of 32 inch over 27 inch for 4K resolution was definitely the right call; I’m at 100% DPI zoom in Windows and it’s just right. All in all, for the money, I am pleased.

My only concern is that I swear the backlight flickers slightly sometimes, mainly around the top left. It isn’t always there; indeed most of the time it isn’t there. When it appears, or at least when I think it appears, I’ve tried recording it with my phone at 60 fps and got nothing – though the phone may just be auto-filtering out changes in brightness to create a more consistent video recording. This BenQ model is supposed to use DC rather than PWM dimming, so it should not flicker at all. If I turn the brightness up I swear I still see it sometimes, but not always, which also points away from PWM as the cause.

There’s a chance it could be me, or the edge backlighting LEDs in the top left of the monitor have a fault in their circuit. Sending it back to Germany for a replacement would be irritating, but let’s see how this coming week fares, it may just be new monitor teething as the electricals settle in or something.

#monitors #vesa-mounts

Sunday 31 January 2021: 02:09. This is the ATorch DL24, a cheap Chinese battery load tester available from Aliexpress and others for about US$30. It is quite famous amongst internet battery testing enthusiasts, because it offers a comprehensive feature suite for one twentieth the cost of typical battery testers. It will do:

  • Constant current testing (Amps)
  • Constant resistance testing (Ohms)
  • Constant power testing (Watts)
  • Constant voltage testing (Volts)

You can program it to stop if voltage drops below a minimum, so you don’t wreck your batteries, or if your battery gets too hot, to prevent explosions. Voltage measurement is now by separate sense wires in this updated model, so there are no longer problems with voltage droop during high current test loads as found in previous models. It is festooned with connections for ease of testing: Mini, Micro and Type-C USB, plus old fashioned crocodile clips. It can transmit its measurements by Bluetooth, so you can record graphs of battery behaviour over time, just like https://lygte-info.dk/ has. It also has a very nice colour display panel which shows both instantaneous and summary measurements. For the money, I came away impressed, though I would hesitate to push more than half its claimed maximum wattage of 150W through it (you can find some entertaining Russian videos on YouTube of this exact model not coping well with 150W of power going through it).

For my purposes of testing my next generation lithium-ion USB rechargeable AA and AAA batteries, which I covered a few posts ago, it is absolutely fine, as I won’t be testing at more than one ampere at 1.5v – which is of course 1.5W, or 100x lower than the claimed maximum. Indeed, during my testing the heat generated was so low that the fan didn’t even turn on.

Anyway, I did not go nuts testing these batteries, so I don’t have a lot of samples. I could only leave the tester running after the kids went to bed, and had to tidy it all away before I went to bed myself, which limited total testing time to a few hours and necessitated higher load currents under test. My results are therefore rather less fair to these batteries than they could be, as lower current draws will show more battery capacity, by definition. Still, my results essentially match those of lygte-info.dk, so I can confirm that my AA ZNTER batteries are absolutely identical to that specific Blackube branded USB battery:

| | Load current | Capacity (claimed: 1.7Ah) |
|---|---|---|
| After one month since charging | 1A | 1.357Ah |
| After one day since charging | 1A | 1.471Ah |
| After one day since charging | 0.4A | 1.556Ah |
| Blackube measured by lygte (source) | 1A | 1.591Ah |
| Blackube measured by lygte (source) | 0.5A | 1.629Ah |

The higher results for lygte, I think, are because he has much thicker, higher quality testing wires than I do. At 0.4A the gap between my results and his drops markedly, as you’d expect if thin wires were the cause, and had I had the time to run a 0.1A test, I’d expect mine to be very close to his.

An interesting additional data point is the approx 8.4% capacity loss over the month since charging. I’d love to know if this is due to the lithium cell, in which case the loss will compound downwards, or due to the microprocessor handling the voltage conversion, which would draw down the battery linearly. If it’s the former, the battery will fairly quickly lose a chunk of its capacity but the decline slows thereafter, whereas if it’s the latter, these batteries will be flat within twelve months, which I would imagine would do no favours to the longevity of the lithium cell.
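
The difference between the two hypotheses is easy to sketch numerically, assuming the measured ~8.4% first-month loss applies throughout (the models and numbers below are illustrative, not a claim about the actual mechanism):

```python
# Two self-discharge models fitted to the same measured first-month loss.
RATE = 0.084  # fraction of a full charge lost in the first month

def cell_leakage(months):
    """Chemistry-driven: lose 8.4% of the REMAINING charge each month."""
    return (1 - RATE) ** months

def converter_drain(months):
    """Electronics-driven: lose 8.4% of a FULL charge each month, linearly."""
    return max(0.0, 1 - RATE * months)

for m in (1, 6, 12):
    print(f"{m:2d} months: leakage {cell_leakage(m):.1%}, drain {converter_drain(m):.1%}")
# After 12 months the leakage model still holds roughly a third of the charge,
# while the linear drain model has gone completely flat.
```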

I also tested some of the AAA model, which unlike the AA model does not claim power storage comparable to NiMH or alkaline AAA batteries:

| | Load current | Capacity (claimed: 0.6Ah) |
|---|---|---|
| After one month since charging | 1A | 0.489Ah |
| After one day since charging | 1A | 0.525Ah |

The self discharge rate here is similar to the AA model at approx 8% per month, and the overstatement of claimed capacity relative to what I measured is almost identical to that of the AA model, at about 15%.

As I concluded in my last post on these batteries, I don’t think these USB rechargeable batteries are quite ready for prime time yet. Firstly, the Chinese manufacturers really ought to stop adding +12% to all battery capacity claims; it does them no favours in the long term. Secondly, these really need to last as long as NiMH batteries do, despite delivering a sustained 1.5v throughout, if people are going to be willing to spend multiple times more to buy them over NiMH batteries. One obvious fix is to deliver not 1.5v but say 1.25v (ideally it would be switchable), which would yield a +20% gain in runtime. If they can then squeeze a further 20% capacity into the same size profile, you would have a better AA battery in every way, apart from cost.
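
The +20% figure is simple energy arithmetic: the lithium cell stores a fixed number of watt-hours, and a constant-current device spends them at Vout × I watts. A back-of-envelope sketch using Blackube’s claimed 2.55Wh for the AA size (converter losses ignored):

```python
CELL_ENERGY_WH = 2.55  # Blackube's claimed energy for an AA at 0.1A

def runtime_h(v_out, current_a):
    """Hours until the cell's energy budget is spent at v_out * current_a watts."""
    return CELL_ENERGY_WH / (v_out * current_a)

t_150v = runtime_h(1.5, 0.1)   # ~17.0 h at a 1.5v output
t_125v = runtime_h(1.25, 0.1)  # ~20.4 h at a 1.25v output
print(f"gain: {t_125v / t_150v - 1:+.0%}")  # +20%
```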

That’s for the AA size, which already has comparable total power (Wh) to NiMH batteries. For the AAA model the gap is far larger, currently about 2x against the higher quality NiMH AAA batteries. I suspect this is because the voltage converter and charging electronics take up relatively much more space. Given how small those batteries are, and how fixed in size the converter electronics must be, I think the AAA size will never catch up. Going the other way, C-size and D-size batteries have the larger volume where the lithium cell’s much higher energy density really could shine, so for the bigger batteries I can see this next generation technology becoming dominant quickly. Not least because few people own rechargeable NiMH C-size and D-size batteries, as those require a separate charger; these USB recharged batteries are therefore much more compelling there, containing more power and being much lighter.


Saturday 23 January 2021: 01:56. Since my last post, Christmas came and Christmas went. We were supposed to have been in the United States visiting Megan’s family, but Covid made that impossible. In order to safely visit my father on Christmas Day, we imposed a self quarantine on ourselves for the preceding five days – we didn’t go near anybody else. Thus we were able to spend the day with my father safely, and indeed none of us caught Covid, so it worked out well. Meanwhile, most of Ireland seemed to party like it was 1999, and the death rate promptly rose twenty fold starting from about a week after Christmas and kept getting worse. Last week, I believe, saw around ten percent of Ireland’s total excess deaths due to Covid since all this began a year ago, which is quite the hangover from all that partying. We dodged a bullet.

Since then, obviously enough it’s been another total lockdown, can’t travel more than 5km, which is a total arse for entertaining my children at the weekend when I have to find something for us to do not at home. Last weekend I took them up a forested mountain and for a long bike ride (for them). Almost certainly I’ll be doing exactly the same again tomorrow and Sunday, because there is precious little alternative within a 5km radius.

I got my battery load tester from China, so I’ll be able to post empirical testing of those lithium ion AA and AAA batteries I was talking about last post soon. The battery load tester is pretty impressive for €30, sure it’s cobbled together from raw parts, but in terms of capability and UX I came away impressed. I’ll talk about all that when I make my post about those batteries.

As a quick update to #mintos, here are the last two months of earnings:

| Month | Annualised return for each month, total | Non-earning capital | Annualised return for earning capital |
|---|---|---|---|
| August 2020 | 10.36% | 9.7% | 11.47% |
| September 2020 | 11.09% | 9.6% | 12.26% |
| October 2020 | 10.86% | 9.5% | 12% |
| November 2020 | 5.51% | 9.7% | 6.1% |
| December 2020 | 11.29% | 11.4% | 12.74% |

The hit in November was due to me getting out of Mogo, in which I had become 92% invested as I mentioned in my last post. That had been causing me concern because, according to https://explorep2p.com/mintos-lender-ratings/, Mogo are in trouble. The hit comes from the 0.9% fee Mintos charge for you to sell your holdings, plus, for the loans with particularly low interest rates and a 6/10 safety rating, the 0.6% discount I had to add to get them to shift. Shift them I did though, and I am now merely 31% invested in Mogo, mostly in the higher paying loans at that. I’m comfortable with it being around a quarter of my total.
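
Incidentally, the last column of these tables looks like it is simply the total return scaled by the fraction of capital actually earning. A quick sketch under that assumption (the formula is my inference, not Mintos’ published method):

```python
def earning_return_pct(total_return_pct, non_earning_pct):
    """Annualised return attributed to the earning portion of the capital."""
    return total_return_pct / (1 - non_earning_pct / 100)

# Reproduces the August and November 2020 rows above:
print(round(earning_return_pct(10.36, 9.7), 2))  # 11.47
print(round(earning_return_pct(5.51, 9.7), 2))   # 6.1
```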

I mainly swapped out for DelfinGroup, 8/10 rated and also highly rated by explorep2p. They had been paying 14% per annum, but just recently axed that back to 11% or even lower – obviously they feel they no longer need to pay as much for capital. Rather than go back into more Mogo, I decided to diversify into a 7/10 rated loan originator, IuteCredit, which explorep2p think highly of. I’m currently at 54% DelfinGroup, 31% Mogo and 15% IuteCredit, with loan early paybacks and interest being directed into more IuteCredit, so its relative proportion will keep growing.

Finally, as you’ll note, non-earning capital has been rising. This isn’t what it seems: I’ve actually been pulling money out of Mintos altogether, so the proportion locked up due to Capital Service defaulting appears to rise. Mintos supposedly did a repayment deal with Capital Service; in theory they’ll repay the whole sum plus interest within three years. All I’ve noticed to date is that what they owe me has been slightly increasing from the interest Mintos are adding onto the debt, i.e. they don’t appear to be repaying a thing yet, or if they are, it’s less than the interest Mintos are adding. I’d imagine there won’t be progress on this until at least the summer, when in theory we’ll all be vaccinated and Capital Service can start bringing in money again. Assuming they haven’t gone bust before then, of course.

Otherwise all is well here. Work, sleep, childcare, the treadmill keeps turning.

#mintos #p2p-lending

Monday 7 December 2020: 01:50. Long time readers of this virtual diary will remember that I occasionally spot a new household technology a few years before it becomes mainstream, and review it here. One memorable such review was of LED filament bulbs all the way back in 2014, when I bought a 3w unit for my hallway for €20. It puts out a lot of nice warm light considering it consumes 3w, and I can tell you right now that it’s still working fine after six years despite being almost always turned on. Its only negatives are that it flickers somewhat (those earlier generation filament bulbs didn’t have a voltage rectifier, so mine pulses at 25 Hz, and it’s noticeable) and that the light it throws off is not entirely even. Nowadays LED filament bulbs are mainstream – you can buy them in any hardware store or off Amazon, and indeed Philips makes a range of them. The new ones don’t flicker, have a more even light throw, and cost about €6 in singles, much less in bulk.

(Speaking of LED bulbs, I recently bought some 1600 lumen Philips LED bulbs; they provide fabulous, even illumination for about €3 each. It is very impressive how bright the room is when they are turned on.)

Anyway, what I’ve got for you today are what I think will eventually become the next generation of rechargeable batteries. They have the identical form factor to standard batteries, so you can stick them in anywhere. Unlike NiMH rechargeables, which have a cell voltage of 1.2v (making things noticeably dimmer than with 1.5v non-rechargeables), these provide a constant 1.5v. By constant, I really do mean constant: they output exactly 1.5v from full until empty, whereupon they drop to 0v. This constant voltage has some big advantages, mainly that things remain bright and never dim over time, as they do with every other kind of battery.

The way these batteries work is that they contain a lithium ion cell, which like all lithium ion cells outputs about 3v. A very small embedded computer runs a DC-DC voltage converter to step the cell voltage (about 3.6v full, down to about 2.5v empty) down to 1.5v. This is why the output voltage is completely constant until the battery is empty. Essentially, the little embedded computer ‘simulates’ a real AA battery.
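
For the curious, an ideal step-down (‘buck’) converter relates output to input through its switching duty cycle, Vout = D × Vin, so the regulator just widens the duty cycle as the cell drains. A toy illustration (idealised, losses ignored; the actual converter topology in these batteries is my assumption):

```python
V_OUT = 1.5  # the simulated AA terminal voltage

def duty_cycle(v_cell):
    """Fraction of each switching period an ideal buck converter conducts."""
    return V_OUT / v_cell

for v_cell in (3.6, 3.0, 2.5):  # full -> mid -> nearly empty lithium cell
    print(f"cell at {v_cell}v -> duty cycle {duty_cycle(v_cell):.0%}")
```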

Furthermore, because each battery is a small computer, you can recharge them over standard micro USB. The embedded computer takes the 5v from the USB and charges the lithium ion cell, changing the colour of a little LED from red to green once full. In the West you can only really get these next generation rechargeable batteries from Amazon, where they are sold under the Blackube branding, amongst others. As you will see, they cost about €40 for four AA batteries on Amazon, which is very pricey.

Blackube and all the others are an OEM rebrand of Chinese manufactured batteries. One of the biggest of the original manufacturers is ZNTER, and you can acquire what appear to be the exact same batteries as the vendors on Amazon are selling directly from China for about half the price, if you are willing to wait six weeks or so. Mine arrived last week, and here is what they look like:

The first thing you notice is how light they are: much lighter than alkalines, probably about the weight of zinc chloride batteries. The second thing you notice is the micro USB slot, which is on the top for the AA batteries, and on the side near the top for the AAA batteries. Other than those two differences, they look exactly like AA or AAA batteries. Plugged into something, the voltmeter reads a rock steady 1.55v at 200 mA discharge, and 1.53v at 500 mA discharge. Here are the characteristics for the AA size according to various sources:

| | Alkaline non-rechargeable (source) | NiMH rechargeable (source) | ZNTER manufacturer claim (source) | Blackube claim (source) | Blackube measured by lygte (source) |
|---|---|---|---|---|---|
| Capacity @ 0.1A discharge | 2.5Ah | 2.0Ah | 1.7Ah | 1.7Ah | 1.65Ah |
| Median voltage @ 0.1A discharge | 1.2v | 1.27v | 1.5v | 1.5v | 1.5v |
| Power @ 0.1A discharge | 3.0Wh | 2.5Wh | 2.59Wh | 2.55Wh | 2.47Wh |
| Runtime @ 0.1A discharge | 25h | 21h | 17h (est) | 17h (est) | 17h |
| Charge time | n/a | 900m | 90m | 120m | 120m |
| Max charge cycles | n/a | 1000 | 3000 | 1000 | n/a |
| Capacity @ 1.0A discharge | 1.21Ah | 2.0Ah | n/a | n/a | 1.59Ah |
| Median voltage @ 1.0A discharge | 1.05v | 1.25v | n/a | n/a | 1.5v |
| Power @ 1.0A discharge | 1.3Wh | 2.54Wh | n/a | n/a | 2.36Wh |
| Runtime @ 1.0A discharge | 1.2h | 2.0h | n/a | n/a | 1.6h |

As usual, you should not trust Chinese manufacturer claims; they are way off reality. Even the Blackube claims are slightly short of measured reality. As you can see, if my ZNTER batteries actually match the Blackube batteries, they are competitive with alkaline and NiMH at low current draws, though with a 20%-33% shorter runtime in exchange for maximum brightness throughout. At higher current draws, these batteries remain competitive with NiMH, and blow away alkaline batteries, which do not cope well with high current draws. They still carry a 20% runtime deficit, but again, you get 1.5v throughout.
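
The derived rows in the table hang together arithmetically: power is capacity times median voltage, and runtime is capacity divided by current. A quick spot-check in Python against the lygte-measured Blackube column (small drift versus the tabled figures is expected, since lygte integrates the real discharge curve rather than multiplying by the median):

```python
def power_wh(capacity_ah, median_v):
    return capacity_ah * median_v

def runtime_h(capacity_ah, current_a):
    return capacity_ah / current_a

print(power_wh(1.65, 1.5))    # ~2.48 Wh at 0.1A (tabled: 2.47Wh)
print(runtime_h(1.65, 0.1))   # ~16.5 h at 0.1A (tabled: 17h)
print(power_wh(1.59, 1.5))    # ~2.39 Wh at 1.0A (tabled: 2.36Wh)
print(runtime_h(1.59, 1.0))   # ~1.6 h at 1.0A (tabled: 1.6h)
```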

Now I don’t know if the batteries I bought are in fact identical to the Blackube ones yet. I don’t have the equipment here to test them (the constant voltage confuses all my NiMH assuming equipment), so all I can report right now is anecdotal experience.

Certainly they do work. I have a set in some Christmas lights currently. You’d typically get about five days if you put NiMH into them, though the lights dull quite quickly; with these ZNTER batteries they are very bright throughout, but for maybe a little under three days. One very curious thing is what happens when one of the three batteries in those lights runs out – it ‘vampires’ 0.5v from the remaining two batteries, so 2.5v reaches the lights instead of the expected 3.0v. This means that in practice the lights are very bright until they go quite dim, as 4.5v drops suddenly to 2.5v. You then don’t know which of the three batteries is empty, so you must recharge all three. Charging does take under two hours, but they draw 0.5A each at the beginning of the charge, not the 0.4A reported by lygte, and definitely not the 0.35A claimed by Blackube. As they approach full they taper the current back heavily, and they never get more than mildly warm around the USB socket (where the IC is).

From this anecdata, I’m thinking that despite their identical outward appearance, these ZNTER batteries are probably not the same as the Blackube ones, and the difference is not in ZNTER’s favour. But they did cost half as much, and they are definitely not merely half as good, so on that basis I’ve done well.

As I really want to find out the true characteristics of these batteries before I put them into regular use (particularly if they might catch fire under load), I have ordered a cheap programmable load tester from China. Once it arrives I’ll be able to drive a 0.1A and 1.0A load upon them, and see how well they perform.

Until then, I will conclude by saying that these next generation batteries aren’t quite there yet as an alkaline or NiMH battery killer. But they continue to improve rapidly year on year – even just two years ago claimed capacities were considerably less – and as miniaturisation of the components allows more lithium into the same form factor, that will continue. I would not be surprised if, three years from now, this kind of rechargeable battery is superior to NiMH on every metric except price, i.e. it will last just as long in runtime at both low and high currents, but output 1.5v throughout, and have thrice the recharge lifespan. Right now four AA batteries can be had from China for about €13, about double what Amazon charges for their very good NiMH rechargeables. I can certainly see that gap closing to under 50%.

I don’t foresee these batteries ever beating alkaline for long lived, low power applications. If you need to power a wall clock for years, expensive alkaline batteries can deliver more than 3.5Wh over five years or more. That cannot be beaten by anything containing an embedded computer which must draw some power, no matter how little, never mind that rechargeable chemistries cannot avoid much higher self discharge rates than non-rechargeable ones. However, for applications with very high current draw, where even high quality NiMH experiences considerable voltage sag, this kind of rechargeable battery will dominate, as lithium ion can deliver vastly more watts than NiMH ever could.

I’ll make another post here with empirical testing of my new batteries when my test equipment arrives. If that doesn’t happen until after Christmas, Merry Christmas!


Sunday 8 November 2020: 23:52. In yesterday’s entry on my summer holiday in Tenerife, I mentioned that I was a little surprised to realise that I hadn’t posted a thing on #mintos since April, so here are my annualised monthly earnings since then:

| Month | Annualised return for each month, total | Non-earning capital | Annualised return for earning capital |
|---|---|---|---|
| March 2020 | 74.02% | 0% | unaffected |
| April 2020 | -55.45% | 0% | unaffected |
| May 2020 | 10.44% | 0% | unaffected |
| June 2020 | 12.10% | 0% | unaffected |
| July 2020 | 3.81% | 9.8% | 4.22% |
| August 2020 | 10.36% | 9.7% | 11.47% |
| September 2020 | 11.09% | 9.6% | 12.26% |
| October 2020 | 10.86% | 9.5% | 12% |

As described in earlier posts, I morally refuse to invest in short term or payday loans, despite those paying much better interest rates with much lower risk, so the above returns are for long term, safest possible (>= A or >= 8 rating) subprime debt on Mintos.

As mentioned in my April entry, in March I scalped all the people fleeing Mintos and made an enormous profit. I then used that profit to exchange all my riskier loans for very safe ones backed by assets (mostly Mogo, the Eastern European car lending giant). As everybody was fleeing Mintos at the time, the spread was only a few percent between riskiest and safest, so the loss in April was less than the profit in March. In short, I rebalanced my investments into safety across March-April, and made a slight profit.

Unfortunately, as the column marked ‘Non-earning capital’ would suggest, I was not out of the woods yet. In July one of the loan originators I had invested in, Capital Service, went bust – the sort of event I had anticipated in my last post, though to be honest I hadn’t seen that particular originator’s failure coming, as they had been relatively highly ranked. What happened was that their customers paid their loan installments weekly in shops when they bought food etc. Normally that’s a great, reliable revenue stream, but thanks to Covid, all of that literally vanished overnight. On top of that, the Polish government gave a loan payment holiday to everybody in the country, and naturally most of the kind of less wealthy clients Capital Service had (i.e. ones who paid their installments in shops, not by standing order from a bank account) took the holiday. Hence, bye bye lender.

About 10% of my investment was tied up in Capital Service, and I hadn’t been able to get out of it because trading in their loans had been suspended very early on, though interest payments continued. We had been expecting the Polish government to bail them out, to be honest, as much of the ruling political party’s support are the sort to have loans from Capital Service, so I hadn’t been particularly worried – until suddenly they announced they weren’t going to pay interest on my loans with them any more. That portion immediately became non-performing, which has hurt monthly returns ever since. If you subtract out the non-earning capital, returns are about normal, despite all the Covid lockdown disruption and ever rising unemployment rates in Eastern Europe. I expect to get most of my Capital Service investment back eventually after wind up, but that is many months out, and I won’t get all of it. Still, it could be far, far worse.

Capital Service no longer paying interest was not the cause of the poor July return, however. The hefty dip in July was because of Mogo car loans rebuying almost all the loans I had with them. Mogo had, for a short while right at the start of covid, been selling loans at 16% but no longer was, so I had been hoovering those up on the secondary market when people sold them. Alas, I was paying a small premium to grab them on the basis it would pay out over the many years of holding them, and they were amongst the safest loan originators on Mintos. And, of course, what happened then was that Mogo repaid in full all loans above 12%, which they are allowed to do at any time on Mintos, and indeed this is one of the big ways you can lose money easily on Mintos (their website prints a big warning when you buy loans off others at a premium). Mogo did this because they knew full well that we’d all buy back all those exact same loans at 11.5%, and they’d no longer need to pay out 16%. If I hadn’t paid a premium to buy those loans, I’d not have lost money, I’d just have earned less in the future than I expected. But I had paid those premiums, so I took a hit. Still, this also could have been far, far worse, I lost about three weeks’ earnings.

Up until the end of October I had become 92% invested in Mogo. This was a bit uncomfortable – so many eggs in one basket – but there isn’t much choice for high yielding, maximum safety loans on Mintos. Then Mintos changed how they rank the riskiness of loan originators, ranking each division of Mogo individually, and bam!, a quarter of my Mogo loans were suddenly no longer with the very safest loan originators. I took that as an opportunity to diversify out, selling my 6 and 7 rated Mogo loans with lower interest rates in favour of 8 rated, higher interest rate non-Mogo loans. As everybody else is doing exactly the same, progress comes in dribs and drabs, but it’s getting there, and I’m in no rush as Mogo has a group guarantee. I expect by the end of this month to be about 50% invested in Mogo, 45% invested in DelfinGroup, and the rest a smattering of risky, near-finished legacy loans which ought to be fully repaid under the buyback guarantee before Christmas. (One of my scalp strategies was to buy loans with less than a month of term remaining at hefty discounts from fleeing Mintos investors. The vast bulk paid out, earning me around 100-200% annualised, but this long tail is the few where the borrower extended the loan repayment by a month six times, and six times is the maximum; so in December the loan originator must repay the capital under the buyback guarantee. Even then, I will have easily made over 10% annualised on this long tail.)

Periodically reading the blog commentary on all of this at https://explorep2p.com/mintos-lender-ratings/ has been interesting. They were once keen on Capital Service, and indeed, had Covid not shut down face to face business, it did look like a good, sustainable lending model for reaching less wealthy, up-and-coming Eastern Europeans. Lots of people like myself have thus ended up with capital trapped in the Capital Service unwind. Such is life and risk – anybody who had read my guide on Mintos here would know all this is subprime debt; it’s risky.

Interestingly, ExploreP2P’s latest loan originator rankings are similar to Mintos’, except that they dislike Mikro Kapital and AgroCredit a lot, whilst Mintos doesn’t care for ExploreP2P’s favourites IuteCredit, Creditstar and Wowwo. As I always did before all this, I choose the common subset of the two rankings, so Placet Group, DelfinGroup and Mogo are the only loan originators with maximum safety ranking on both lists. There I shall stay until the pandemic clears, and hope for the best in the subsequent economic rebound.

You may be reading all of the above and thinking that I did not do well. Yes I did lose a good chunk of my Mintos earnings preceding this, about 4% of capital invested, leaving me with a +8% gain. And I wouldn’t be surprised to lose 10-20% of my Capital Service stake, which would be a further 1-2% of capital invested. +6% return in a year might look rather poor compared to the +12% it could have been.

But you must remember I’m not investing for returns, I am investing to negate inflation on a larger cash pile. On that basis, I am currently almost bang on 1% return, which was almost exactly the average rate of inflation in 2019 in Ireland. Obviously, after income tax on the Mintos earnings, that’s more like 0.5%, so I lost 0.5% of my cash’s purchasing power last year. However, thanks to covid, inflation will be negative in 2020 in Ireland, so as much as covid has hurt my Mintos earnings, there is a corresponding hit to inflation as well. They’ll easily cancel each other out and then some, so I think my 0.5% loss last year should get undone this year.

You should also bear in mind all the calamities I successfully avoided. I had been heavily invested in ExpressCredit loans from Botswana. I divested completely last year after getting scared by an ExploreP2P report, losing a bit of income to inexperience in the process. Had I remained invested in them, I’d have lost everything, as they went down quick and early from Covid. I also successfully got myself free of Finko, despite at one time having half my money in them due to not watching the auto investment bot closely enough. It cost me money to get clear of them, and again, had I not done so I’d have lost half my money since. Lots of loan originators on Mintos went bust; I successfully predicted almost all of the carnage in advance and avoided it. I just got caught out by Capital Service, because I didn’t expect the Polish government to footgun its financial industry like it did by giving loan repayment holidays to everybody and then not supporting the lenders. Oh well, you can’t get everything right all the time.

Anyway, I expect more loan originators on Mintos to go bust in the next six months. With a bit of luck, it won’t be Mogo nor DelfinGroup. Thereafter we should economically rebound as lockdowns stop and vaccines start, and I might then start thinking about taking on a bit more risk on Mintos again, especially if Mogo or DelfinGroup repay in full all the higher interest loans they currently are selling in order to drive down their debt costs in a less risky market.

#mintos #p2p-lending

Sunday 8 November 2020: 00:08. Three months have passed since my last update here, with little to report except the travel we packed in during late summer between Covid lockdowns. We had actually booked Tenerife for August well before Covid, and despite everybody freaking out, I decided that we ought to go anyway, on the basis that a second lockdown was surely coming after the schools returned (and oh how right I was on that!), so best to get your holidays in now, as there would be no further escape for long thereafter. As we were renting our own house there and would not be expected to come close to other humans, mostly travelling around the island and never going indoors, I felt it no riskier than staying at home. On this I was correct – even in the completely outdoors water parks, which had by far the most close contact with others, the water is both chlorinated AND sea water, and thus highly salty.

I had never been to the Canary Islands before. They have a reputation for being a tourist trap. I am glad to report that whilst Tenerife does have its tourist trap bits, if you avoid those it is a world class destination. There is enormous variety of landscape and terrain on a single island whose circumference you can drive in under two hours on a very good motorway. You have rainforest, lunar/martian landscapes, volcanic terrain complete with sulphur smell, black sand beaches, cliffs, valleys, plant and animal life – some of the most amazing sights and scenery I’ve ever seen. There is lots of history: old fortifications, naval battles, pirates, old world towns and architecture, many cathedrals, and some very excellent museums. Obviously the food and drink are superb, as anywhere in Spain, and cheaper than in Ireland. And it’s warm, yet surprisingly not that humid, as the very tall mountain mixes down dry air from high up, so it has probably the best climate of anywhere I’ve ever been – dry without being too dry, warm without being too warm like summers on the Spanish mainland. Something you really notice compared to the rest of Europe is just how pristinely clean everything is, at least if you avoid the south – no plastic on the beaches, no pollution that you notice when breathing, and you can see clear across the sea to the other islands. Yet, at the same time, if you want nightlife you have it; if you want your drive-through McDonalds, it’s there. Internet connectivity was generally fabulous – 4G on my phone throughout, and very high performance at that, much better than 4G in Ireland. It has all the comforts of Continental Europe, a much better climate, and a vast choice of stuff to do. I came away seriously considering relocating there permanently. That’s how nice it is out there.

Now, in the end, we weren’t there during normal times. We flew out and back on an almost empty plane (which was very pleasant as a result). The English speaking south was desolate, but things were busy enough in the Spanish speaking north where we were, as Spanish mainlanders could holiday there without quarantine. The island itself was thus nicely unfull overall, which made it very pleasant indeed to travel around, yet where we were staying all the restaurants and facilities were open and quite busy. I am quite sure I would not like it anything like as much during normal non-Covid times. Something like twenty-five million people visit per year, and in high season the island is known for being absolutely jammers full, which does not sound pleasant at all. So our experience was not typical, and permanently relocating there would not match what we experienced.

And besides, ultimately, it’s a four hour flight from Northern Europe. Getting to and from it is therefore a pain, and not at all guaranteed to remain financially viable in the medium term as climate change gets cracked down upon. Tenerife would be a great place to relocate to if you were young and mobile (assuming you had work doable remotely), or were retired and no longer dependent on finding new employment if your current employment unexpectedly terminated. As much as living in Northern Europe is much less pleasant, there are very good reasons why the young with skills and motivation have been migrating from south to north for decades now, with no change in that pattern likely soon. The North is where the good jobs are, and it’s where we already live.

After much research, I decided on staying for our nine days in Radazul, a commuter town on the southern coast just outside Santa Cruz, the capital. I chose it because if you stay on the northern coast, the climate is far more humid and cloudy, whereas on the southern coast you get blazing sun and clear skies most of the time; there is thus far less vegetation in the south. The entire north of the island is Spanish speaking, and apart from the extreme mountainousness, you would swear you were in some part of the Spanish mainland in terms of look and feel. Here is the view from the house we rented:

We looked at that each morning as we drank espresso waking up. It was very pleasant.

One of the earlier things we did was to scale the mountain in the middle of Tenerife, which takes you up 13,000 feet or so. On the way up the terrain completely changes multiple times – forest, shrubs, desert, then actual elemental sulphur emitted recently from the volcano:

Yes, it actually smells like hell up there thanks to the sulphur. The kids seemed most impressed with the stinkiness. And obviously the view from 13,000 feet up is quite something, as is the lack of oxygen which was also a first for the kids. They handled the steep cable car ride well I thought.

On the large plateau on which the mountaintop sits – a mere 8,500 feet or so up – there is some amazing terrain. Here is me walking in it. It wasn’t sand, by the way, but more like a kind of pumice gravel with a very odd consistency – indeed it felt very much like walking on a Martian surface:

Clara was, as you can see, quite taken with the landscape and was attempting to capture it in her notebook.

We went lots of other places with many beautiful sights. Fabulous variety in Tenerife. The only place where we failed to get good photos that I would have wanted to show here was the rainforest north of Santa Cruz. It is full of laurel trees, and the forest has been there since the time of the dinosaurs – this is what a typical forest looked like in their day – and it no longer exists anywhere in the world outside a few remote islands. If you search Google for images of ‘Anaga Mountains’, you’ll see what we failed to capture. The reason for our failure, incidentally, is that the forests were closed to walkers when we visited due to fire risk, so we had to just drive through them.

The final place with photos I want to share here was Masca Valley. We very nearly didn’t go here, it was right at the end of our holiday and hadn’t been high on our priority list, partially because it is literally the furthest and most difficult place to get to from Santa Cruz, and partially because it’s a mecca for hikers, and we didn’t think it suitable for the kids. We mainly went there because we thought our visit would be incomplete without going. And boy were we right!

Yeah this place is somewhere very special indeed. The kids, despite their young ages, had been wowed on a number of occasions during the trip. You’d have thought they would have been all wowed out. Then they came here. Stunned silence followed for quite some minutes as they gawped around them attempting to take it all in. The last time I saw them do that was their first visit to Yosemite Valley. Yes, Masca Valley is that impressive.

The town in the centre is itself built up a steep slope. We had lots of fun going down into the town on perilously steep and slippery cobble paths. You can actually see the town stretch along the ridge in the photos to that middle peak; we walked all the way along. The entire town is surrounded by very steep mountains, and the town itself couldn’t be cuter looking – it’s hard to believe it wasn’t intentionally designed to look incredibly pretty (the actual story of why it looks as it does is enormously depressing, and mainly due to horrible sustained poverty, discrimination, and Christian missionaries).

Something these pictures completely fail to capture is what is going on over your head in Masca Valley. If you look up, there is this maelstrom twisting and twirling above you, occasionally splattering you with little bits of drizzle. It is the humid air from up north blowing up over the mountain, where it collides with the dry, desert air of the south. Clouds carried up the north side evaporate when they hit the south, but they don’t do so instantly. Rather, it is like a flux, a never repeating plasma of vortices shifting and twirling against each other. It is extremely hard not to just gawp at it for twenty minutes in silence; it is completely mesmerising.

On the way out of the valley I attempted to capture that maelstrom from up high. I completely and totally failed, but at least you get the idea:

You can see all that cloud from the north on the right, and the dry clear air from the south on the left.

Finally, Tenerife has an abundance of amazing man-made sights as well, including a world class children’s science museum as good as any in San Francisco. Here is a picture of the Black Madonna in her cathedral in Candelaria, and indeed myself and Henry looking at her:

In hindsight we should have taken a lot more pictures of the capital Santa Cruz, which is a lovely Spanish town with an old-world feel, yet also cosmopolitan (we ate very nicely there, despite covid, and extensively wandered its streets by foot). We also should have taken more pictures of Garachico, which is not much changed since the 17th century due to getting partially wiped out by a lava river pouring through it. It had been the capital of the island, the wealthiest part, so it was full of opulence that was state of the art for its time, most of which was left unchanged after the citizens rapidly relocated elsewhere. It is thus chock full of heritage and history, including a monastery and convent still operating since back then, unchanged in all these centuries.

You will probably note that I barely mention in any of the above the Tenerife which 95% of visitors think is Tenerife: the southern, English-speaking part. Don’t get me wrong, we did spend about a quarter of our time there, mainly visiting the theme and aqua parks, which are all world class, as good as any in Disneyworld Florida etc. We also stopped off there for an afternoon to wander around, and I can absolutely see the attraction: it is heavily overdeveloped, but that also means everything is within walking distance or a taxi ride for the most part, which means you can spend most of your time there drunk and incapable of driving and that’s not a problem. Waking up in your five thousand room hotel with a hangover is fine when five minutes’ walk away there are golden sand beaches (the golden sand is imported so tourists get what they expect; the natural stuff is black, and apart from being very hot in the sun it’s very much superior to golden sand). When we were there it had only English old people in it, who still kept the bars busy, mostly whinging about Brexit and the NHS. They were friendly, but very much the kind of English who annoy everybody else in Europe. If it were running at full tilt, I suspect it would be lots and lots more of the same, only a bit younger, and with lots more children, as Tenerife is the most favoured destination for those with younger families.

For a cheap holiday break away, you can get a room in those five thousand room hotels for maybe €200/week, flights might be another €200, spending money maybe €500, and you’ll be done for under a grand. Try achieving the same with a week in Killarney, for example, even though you don’t need to fly there. Last time I tried I spent two grand, because Ireland is very unreasonably priced compared to southern Tenerife. That’s why so many Irish flock to the Canary Islands instead of holidaying locally: Irish tourist destinations are geared for richer folk than the Irish.

Now, I did spend a lot more than two grand on our nine days in Tenerife. The house, which was twice the size of our own in Ireland, took a good chunk of it. As did all the activities out each day. All in all, excluding flights, it was similarly priced to our Christmas in California the year before last, so our rate of ‘cash burn’ was quite similar, though we did do more expensive activities per day in Tenerife than we did in California, during which many days were spent scouting out our wedding, and thus were mainly driving and not spending money. Obviously, both were well under half the cost of our two weeks in Disneyworld Florida nearly three years ago, which was hideously expensive, but also unforgettable.

It’s now 2.25am, and I’m very tired so I’ll stop for now. Next few days I want to write another post updating where things are at with my Mintos peer-to-peer lending investments, as I just recently shifted allocation once again due to new news. See you then!


Thursday 13 August 2020: 01:24. The keener readers amongst you may have noticed that about two weeks ago on Monday 27th July, this website vanished! It only reappeared Tuesday 11th August, a downtime of some two weeks!

This is certainly not the first time that nedprod.com has suffered outage. In the roughly 22 years that this website has been here, there have been multiple uptime calamities, some my fault, some bad luck, and some malfeasance of the website hosting provider. However, this is the first time that I’ve experienced a catastrophic hardware failure on a rented server – it was working fine, I rebooted it for the first time in 485 days, and it never restarted. All data on that server was lost.

This partially explains how long it took me to restore this website: whilst all irreplaceable data such as email was safely backed up, and none of that was lost, I did lose all my replaceable data, where ‘replaceable’ is defined as ‘all the stuff repeatable using Niall’s extremely limited free time’ – a definition which, back when I first decided not to back everything up to home, assumed pre-children levels of free time.

My first priority was email; email receipt was restored onto a new, temporary, server by Thursday 30th July. But I couldn’t reliably send email until ~2am on Sunday 9th August, with gmail having had to suffice in between. Then followed a process of restoring my various websites, until I had restored enough to use my fancy hand written Javascript post editor, in which I am writing this now. Even now, I must still manually initiate rebuilds of the website, because the docker plugin which I had written to do that has been completely lost.

Which brings me to the point of this post: irreplaceable data is obviously the most important data of all. My automated backups worked a treat on those. But I hadn’t really considered deeply, until now, just how many hours of my time had been invested into my public server. As a conservative estimate, it’s many hundreds of hours. Normally, when I transition server providers, I take a complete copy of the preceding server onto the new server. That way, all the custom scripting, tweaks etc from the preceding servers are never lost. But when you lose the whole server, all that accumulated investment gets lost. I know a lot of this stuff is trivial: for example, I had written a small Python script to grok the RTE Pulse page for the current show title, and use that to tell the streamripper doing the recording what the name of the current show is. Thus I could constantly record RTE Pulse, and play back specific shows at work. As much as I could rewrite that in a few hours, it is a few hours of my time to debug the thing. And my non-sleep non-work non-childcare hours are an exceptionally scarce resource. It is extremely likely that much of this lost infrastructure I won’t be restoring, because most of it was a convenience rather than a necessity – taking RTE Pulse again as an example, I know the shows I like the most, and they are all on mixcloud, so I can just manually go there for each of them.
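To give a flavour of the kind of small convenience script that was lost, here is a minimal sketch of what the RTE Pulse show-title grokker might look like, rewritten from scratch in Python. The schedule URL, the HTML markup being parsed, and the exact streamripper flags are all assumptions for illustration – this is not the lost code:

```python
import re
import subprocess
import urllib.request

# Hypothetical schedule page URL - the real page and its markup differed.
SCHEDULE_URL = "https://www.rte.ie/radio/pulse/schedule"


def extract_show_title(html: str) -> str:
    """Pull the current show title out of the schedule page HTML.

    Assumes the page marks the live show with a 'now-playing' class;
    the real page's markup would need inspecting first.
    """
    m = re.search(r'class="now-playing"[^>]*>\s*([^<]+?)\s*<', html)
    return m.group(1) if m else "Unknown show"


def record_current_show(stream_url: str) -> None:
    """Invoke streamripper, naming the recording after the live show."""
    with urllib.request.urlopen(SCHEDULE_URL) as resp:
        title = extract_show_title(resp.read().decode("utf-8"))
    # -a names a single output file; -A disables per-track splitting.
    subprocess.run(["streamripper", stream_url, "-a", title, "-A"], check=True)
```

The parsing lives in its own function so it can be debugged against saved HTML without touching the network, which is where those "few hours to debug the thing" would mostly go.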

Anyway, obviously enough I have taken measures to prevent this ever happening again. This website is now being served from a €5/month dual core Intel Atom C2338 @ 1.74GHz dedicated server with 4Gb RAM and a 128Gb SATA SSD. It is very severely underpowered; it runs at a fraction of the speed of my preceding eight core Intel Atom C2750 dedicated server for €11/month. But here’s the key thing: I now have two of those servers, so for the same money, I get failover redundancy, albeit with far less CPU grunt (half the total CPU cores running at two thirds the clock speed). Because these little servers are so underpowered, and I am making them run ZFS on root because I am a mean person, I’ve had to disable PHP processing entirely – this is now back to being a 100% static website, just like it was in the 1990s. You readers probably won’t notice the difference – the only missing bit is the visitor counter at the top, which used a bit of PHP and a SQLite database (also lost). I do feel that loss a bit, as I had visitor counts per page going back to the 1990s. But given that nobody since the 1990s bothers with that any more, I doubt the loss will be noticed.
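For the curious, a counter like the lost one only ever needs a few lines. Here is a rough illustrative reconstruction in Python with sqlite3 – the lost original was PHP against the same kind of one-table SQLite database, so every name below is hypothetical:

```python
import sqlite3


def bump_counter(db_path: str, page: str) -> int:
    """Increment and return the visit count for a page.

    Illustrative reconstruction only: one table keyed by page path,
    one upsert per hit (the upsert syntax needs SQLite 3.24+).
    """
    con = sqlite3.connect(db_path)
    try:
        con.execute(
            "CREATE TABLE IF NOT EXISTS hits "
            "(page TEXT PRIMARY KEY, count INTEGER NOT NULL)"
        )
        con.execute(
            "INSERT INTO hits (page, count) VALUES (?, 1) "
            "ON CONFLICT(page) DO UPDATE SET count = count + 1",
            (page,),
        )
        con.commit()
        row = con.execute(
            "SELECT count FROM hits WHERE page = ?", (page,)
        ).fetchone()
        return row[0]
    finally:
        con.close()
```

Of course, on a pure static site there is nowhere server-side to run even this much, which is exactly why the counter is gone.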

Even with this now being a pure static website, ZFS is so much work for these tiny Atom CPUs that storage bandwidth is quite impacted. For incompressible data:

  • Raw 128Gb SATA SSD: ~470Mb/sec read, 340Mb/sec write (it’s a Sandisk X400 SSD, a four year old TLC design).
  • Unencrypted LZ4 compressed: 348Mb/sec read, 244Mb/sec write (approx -26% read and -28% write over raw, but usually most data compresses well, in which case this compression yields a net gain).

This in turn badly hurts the 1Gbps NIC, as served by nginx, tested from a nearby server:

  • Raw network can achieve ~100Mb/sec i.e. RAM to RAM via nginx.
  • Cached file content: 80Mb/sec (28% user, 37% system, 34% idle; approx -20% over raw).
  • Uncached file content requiring i/o and LZ4 decompression: 59Mb/sec (22% user, 41% system, 37% idle; approx -41% over raw).

During that last benchmark, one of the two Atom CPUs is maxed out, the other is fairly idle, so basically the NIC is being throttled by the lack of single core compute available. In the end though, three fifths of a gigabit is probably enough for most people only wanting to pay ~€5/month. And, because we shall be load balancing web requests across both servers, that’s twelve tenths of a single gigabit server i.e. +20% more available bandwidth, for the same money.
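Numbers like the raw storage figures above are simple sequential throughput measurements. Purely to illustrate what such a figure measures, a crude sequential-read timer in Python might look like this – a real benchmark would drop the page cache between runs and use direct i/o, which this sketch deliberately does not:

```python
import os
import time


def sequential_read_mb_per_sec(path: str, block_size: int = 1 << 20) -> float:
    """Time a straight sequential read of a file, returning MB/sec.

    Crude illustration only: without dropping the page cache first
    (e.g. via /proc/sys/vm/drop_caches on Linux), repeat runs measure
    RAM bandwidth rather than the SSD.
    """
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        # Read the whole file in fixed-size blocks until EOF.
        while f.read(block_size):
            pass
    elapsed = time.perf_counter() - start
    return (size / (1024 * 1024)) / elapsed
```

The cached versus uncached distinction in the nginx figures above is exactly this effect: the second serving of a file comes from RAM, not from ZFS and the SSD.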

Anyway, time for bed methinks. I hope y’all are doing well, and you weren’t worried by here disappearing!

Saturday 11 July 2020: 01:19. My current phone, a HTC 10, which I picked up new but from clearance stock on eBay in February 2018 for €295, has been getting closer to its end of life recently. Its battery won’t make the day any more, though if you charge it whenever you get into the car, into work, etc, it’s still fine (though it recently just died at 70% battery when I was taking a lot of photos quickly, when it was very cold). Normally I’d probably stick it out for another six months, and wait until it’s three years old before replacing it, but another worry is that its USB-C socket has become loose, so it’s increasingly 50:50 whether it charges at all when I plug it in before I go to sleep, and I’m well aware that if it stops being able to charge, I lose all my data on it. Another factor is that Megan’s phone, a Samsung Galaxy S7, also new from clearance stock on eBay in October 2018 for a bargain €210, has a battery slightly worse than my phone already (she uses hers more!, plus it has a less power efficient chipset), and it’s less of my scarce free time for me to replace two phones at the same time. So it’s time for new phones for both of us!

For the record, both the HTC 10 and Galaxy S7 phones have been great choices. As models abandoned by their manufacturers for software updates due to being end of life, both ran LineageOS from the beginning, mine with MicroG replacing Google services, hers with standard Google services. There are minor design failings in each, no doubt (the HTC 10’s display is its weakest feature relatively speaking; the S7’s camera is its weakest feature), but both phones are small yet with QHD displays, and nobody makes phones both physically small and with high definition displays any more.

Normally speaking, I would have gone for the Galaxy S9, which, as it’s no longer in production, is end of life and can be bought new from clearance stock on eBay right now for under €300 delivered. It’s a great phone for that money, it has official LineageOS support so it’ll be trouble free, and it’s almost exactly the same width as the HTC 10 or S7, just slightly taller. It’s overwhelmingly the rational choice if you want a very reasonably priced high end phone with a first class LineageOS experience.

And of course I, not being rational, didn’t do that. I went for the Galaxy S10 instead, currently available new on eBay for under €500 delivered thanks to the recent price cuts due to the S20 release. Is the S10 66% better than the S9? Absolutely not: sure, the S10 has twice the RAM, twice the storage, twice the number of cameras, +33% more CPU grunt, +66% more graphics grunt, a fancy in-display fingerprint reader, an even better HDR+ display than the S9, and much louder speakers than the S9. But its LineageOS support is still a work in progress, and the ‘decent cases’ story is really terrible for the S10, which doesn’t seem to have anything like the case choice that the S10e, S10+ or S10 Lite have.

Nevertheless, I still chose the S10 over the S9 for three reasons: (i) I like to listen to the radio when in the shower, and the HTC 10 just isn’t quite loud enough, while the S9 is about the same loudness as the HTC 10; (ii) I have a sneaking suspicion that older batteries, even if unused, don’t last quite as long as newer unused batteries; (iii) I don’t take many photos, but when I do I take panoramas, so the dedicated wide field panorama lens appealed to me.

All that said, if I were you, I’d choose the S9, unless you like installing quirky LineageOS betas. Don’t get me wrong, three months from now the S10’s firmware story will likely be far better. Almost without doubt, the most rational high end LineageOS phone to buy next year will be the S10, and that was also a factor in choosing the S10 over the S9.

Comparing the HTC 10 to the Samsung Galaxy S10

I’m about to do something really unfair, and review both phones comparing them against one another. The HTC 10 went on sale in April 2016, whereas the S10 went on sale March 2019. Three years separate these former flagships from both companies. Is there any doubt which will win?

Operating system


My HTC 10 runs LineageOS Android 9, whereas the Galaxy S10 runs LineageOS Android 10. What few differences there are between those two Android versions I have so far found meaningless. Result: Draw.

Speed


My HTC 10 is plenty swift for most things. The Camera app is slow, but that’s because I’m running a hacked Pixel 3 camera on it, which works just fine on my older Qualcomm DSP, just at a fraction of the speed of the Pixel 3’s Qualcomm DSP. Can’t say I care though, it takes pictures just fine, just with a bit of lag. And said pictures are very, very good (as we shall see later). But for general day to day use, it’s very rare that I find myself frustrated with the HTC 10’s speed. Its Snapdragon 820 didn’t have the heat or throttling problems of immediately preceding Qualcomm CPUs. Anything I ever tried with it, including games and VR, ran absolutely fine.

Now, as much as I just said that the HTC 10 always felt fast, it wasn’t until I used the S10 that I realised just how much snappier the UI could be. Per core, the S10 is about 70% faster, and it is very noticeable when using the phone. Result: S10 win.

Display


Yeah, this isn’t even a contest. The HTC 10 has what was, even in its generation, only a rather good Sharp IPS LCD panel. Not terrible by any means, but not class leading at the time: Megan’s S7 AMOLED panel easily beat it back then, even with its uneven coloration and blow out of blues. The S10, meanwhile – well, I’ll straight out say that it’s the finest display that I have ever seen or used, on any device ever, including pro workstation displays. Unlike earlier AMOLED displays, the S10’s is ridiculously accurate (no blowouts or overdone colour hues), whilst simultaneously having this really deep richness and gamut. Colour is somehow simultaneously understated AND detailed and fine. My 2019 Dell XPS 13 laptop has a fine HDR 4k panel with 80% DCI-P3 gamut, but comparing the same photos side by side on both displays, the S10’s display (113% DCI-P3!) just blows that right out of the water. No comparison: the S10 has the best display ever seen in mass production, period. Result: S10 win.

Speakers


I haven’t tested headphones yet on the S10, though the HTC 10’s headphone DAC would be very hard to beat: it can drive very high current headphones with ease, and is widely regarded as one of the best headphone drivers ever made. On the speakers, which I have tested across multiple days, the S10’s stereo speakers can reach far louder volumes than the HTC 10’s, so I will without doubt be able to listen to radio in the shower. Neither phone distorts audio at maximum volume.

But do you know something? The S10’s speakers are tinny. Perfectly clear, but there is absolutely no bass. Whereas in the HTC 10, the bottom speaker is a ‘bass’ speaker, in so far as such a thing is possible in such a small space. And it makes all the difference. Radio from the HTC 10 is much richer, fuller, and more pleasant sounding. Male voices in particular sound much better. The HTC 10’s speakers are as crap as the S10’s for music, however; only on radio are they clearly superior. Sorry Samsung, I know the S9 was far, far better than preceding phones for the speakers (the S7’s single speaker is awful), but the S10 still falls far short of the HTC 10. Result: HTC 10 win.

Camera


Perhaps surprisingly, both phones have almost identical main camera units: both 12MP, both almost identical field of view, both optically image stabilised. I took some photos earlier today from my office, and I’ve got to be honest, there is very little between them in bright sunshine, even zooming into the pictures real close. When the HTC 10 launched, it was lauded for its camera, which was much lower resolution than the competition at the time, but its much larger sensor pixels gave far superior low light performance. Samsung copied the idea for the S9, which had only an 8MP camera, and thus after a few incremental evolutions weirdly the S10 ended up exactly where the HTC 10 was three years ago. And taking a picture just there in the almost-dark, both cameras still perform about the same – maybe, just maybe, the S10 is marginally better despite its smaller sensor pixels, but it does have a larger aperture to let in more light. Result: Draw.

Let me be very clear here: the S10’s camera absolutely blows away the S7’s camera. Megan and I often noted just how shit the S7’s camera was compared to my HTC 10’s, with her even going so far as to deliberately use my phone if the photos were important. That’s just how good the HTC 10’s camera was, and at least now we know the S10’s camera is no worse.

I should also mention that the S10 is using OpenCamera, which uses the generic AOSP Camera2 API, whereas the HTC 10 is using a hacked Pixel 3 camera, which is Qualcomm and Google proprietary and consistently wins the annual camera phone reviews. So the comparison isn’t entirely fair.

Fingerprint reader and buttons

The S10 has a fancy ultrasonic fingerprint reader built into the screen, whereas the HTC 10’s is an ugly slatted thing in the hefty bezel below the display. I’ve got to be honest, both work well. The S10’s had a reputation for being laggy, but perhaps that was early firmwares; I’ve found it not noticeable. Its usable surface area is a little small though, and it’s not always entirely obvious exactly where to put your finger. Whereas the HTC 10’s fingerprint reader ‘just works’, and doubles as a ‘home button’ in addition to the other two hardware buttons next to it for back and switching apps. I know it’s ‘not cool’ to diss bezel-less phones, and yes the S10 has a screen reaching almost entirely from the top to the bottom of the phone. But most of my time is spent clicking and moving around apps rather than watching content (for which the HTC 10’s aspect ratio is just fine with typical widescreen content in any case), and I hate to say it, but everything is just a touch more fluid in that department on the HTC 10 than on the S10, which has the screen do everything.

And oh, there is one other major difference: the HTC 10 has its volume button on the right, so it’s available for use with a folio case closed. The S10, for no good reason, has the volume button on the left, hidden beneath the hinge of your case, so you have to open the case to change the volume. Which sucks. Between both of those differences: Result: HTC 10 win.

In the hand


Holding both phones in your hand, naked, they are surprisingly similar. I know that earlier I said that the S10 is taller, and it is, but really there is barely anything in it: the case you choose would make more difference. They are almost identical in width, the S10 very marginally less so. The S10 is noticeably thinner, but the HTC 10 is all aluminium and doesn’t feel as plasticky. They feel about the same weight, both with more weight towards the bottom to aid balance, the HTC 10 having more mass towards its centre, whereas the S10 has more mass around its edges. I know nobody uses their phone naked – they always have a case on – so to be honest I’m calling them so close that the case makes the difference. Result: Draw.

Conclusion


I think that coming from the S7, Megan will find the S10 pretty much better in every single area. I think that she’ll be very pleased with the upgrade, because on every individual measure the S10 is better than the S7, as it ought to be three years on.

Coming from the HTC 10, the picture is more mixed, as you’ll notice by the draws in the results above. I really wish the speakers produced better quality sound whilst still being louder: my Dad’s high end iPhone just blows all our phones out of the water for speaker quality, and I don’t understand why Samsung can’t achieve the same in their flagships. I would also strongly prefer the button layout of the HTC 10; I have no idea why Samsung chose the left side for the volume button.

So I’m giving up more than Megan will be, particularly on audio. I therefore think I’ll miss my HTC 10 in some aspects, despite the three years of evolutionary distance. I remember feeling a similar loss when I transitioned from my Huawei Nexus 6P, which was another great phone; I summarised my thoughts about leaving it for the HTC 10 at the time. Preceding the 6P was the Nexus 5, which I still have, and which, unlike all my other preceding phones, is still working well. Maybe due to being manufactured by LG? Still, whilst a good phone, the Nexus 5 wasn’t a great phone like the 6P and HTC 10 were. I’d even throw the Nexus 4 into the ‘great phone’ category; I only used it a bit because it was Megan’s phone, but it was showstoppingly good in its day, and I remember it remained competitive in terms of CPU and display even years after she got it. And all those phones were far better than the original Samsung Galaxy Nexus, which was very expensive at the time and not very good, except for its early AMOLED display and its outstanding build quality, which puts even the S10 to shame today.

Going forth, given that you can’t get clearance unused Pixel phones at sufficient discount, I can see Samsung Galaxy or Xiaomi devices being the only high end LineageOS choices from now on, with OnePlus devices a possible dark horse. HTC have pretty much given up on making great phones. Huawei don’t seem to provide bootloader unlocking for recent devices, nor does the company making those very nice Nokia branded devices nowadays. Sony as always are all over the place, and the uncertainty means very patchy LineageOS maintainers. OnePlus’s recent devices look competitive, but like Google they don’t currently dump heavily discounted clearance stock of unused devices onto eBay, so for older devices they aren’t price competitive with similarly specced Samsung or Xiaomi models. Xiaomi devices currently trail Samsung devices in specs – they tend not to do 1440p displays, and even then the displays they use are much inferior to Samsung’s – but I can easily see them catching up over the next few years. It’s a shame, actually, that Huawei don’t allow bootloader unlocking, as their devices are good competitors for Samsung’s right now. But, equally, they’re no cheaper than Samsung for the same spec at dump prices, and Samsung devices always reliably draw in lots of LineageOS maintainers. So I can see myself and Megan going to the Galaxy S30 next, then the Galaxy S50, and so on. A one trick pony, but I’m very sure that Samsung will ensure they remain competitive going forth.

#htc10 #s10 #galaxy_s10

Click here to see older entries

Contact the webmaster: Niall Douglas @ webmaster2<at symbol>nedprod.com (Last updated: 2019-03-20 20:35:06 +0000 UTC)