Sunday, September 1, 2024

Client Caching in Asp DotNet Core

Page caching appears to be a slightly vexed topic in ASP.NET Core. I think a significant factor is that a lot of the discussions around caching fail to make the distinction between client-side and server-side caching. So what are these?

Client Caching

Client-side caching occurs only in the browser. In Chrome, if we switch to the Network tab and load a page of an ASP.NET website, we can see caching in action:


Static assets are all cached by default - so things like jQuery, Bootstrap, images and CSS will be downloaded once and from then on fetched from the browser's local cache, which dramatically improves page load speed and reduces bandwidth. Conversely, dynamic endpoints, typically methods of a controller (here the original page /Home/Person/4819, and the listview-cache.js calls), are not cached. It is assumed that dynamic endpoints may change on each call, so caching would be inappropriate.

While this is generally true, for web based database systems there are often long lists of data, sometimes 10+ MB, that change infrequently but must be added to every page load. It would be good to be able to cache these lists, even though they need to be generated by a dynamic endpoint.

While it may seem unnecessary in today's high speed world, sometimes these apps are used remotely over a slow VPN connection, and in that scenario every kilobyte counts. An alternative to caching that is often adopted is AJAX partial loading; however, this is not much more efficient, and on slow connections it can severely affect responsiveness - personally, I hate AJAX loading. The best model IMO is one that has a long initial page load, but then only short delays for small AJAX payloads and short subsequent page loads - that is, cached data.

Server Caching

Server caching is a completely different beast. Server caching does not affect bandwidth at all, but rather server load. The idea is that if a server has to perform a lot of calculations to render a page, but then that page remains valid and can be re-used, it can be cached on the server. The server middleware can then return the cached version of the page rather than using its resources in re-rendering the page.

In AspNet Core, ResponseCaching is the service that handles server side caching, and it has recently been superseded (I think, although it may just be augmented) by OutputCaching. Most of my apps are internal to organisations, have fewer than 100 users and near-negligible server load, so I will never care about Output/Response Caching, but if you had a high traffic public website it could be crucial to your performance tuning.

There is a very important rule when using Server Side Caching on authorized endpoints with this library: NEVER cache sensitive data that is specific to a user - cached data should be data that will be used for every authorized user on the website.

To game this out, consider user X is logged in to the site (authorized) and goes to endpoint A: https://mysite.com/myaccountsettings

Now user Y logs in and also goes to endpoint A. Rather than regenerating the page, the server serves up the cached page previously served to user X.

You can see why this is a problem on a number of levels - not the least of which is security. And this is why ResponseCaching in AspNet is completely disabled for authorised endpoints.

Will the Real Caching Please Stand Up?

I think a lot of the confusion around caching is caused firstly because very few people stop to clarify whether the conversation is around server or client caching, and secondly because ResponseCaching is essentially a server caching service that also has settings that control client caching.
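To make that dual role concrete, here's a rough sketch of my own (assuming the .NET 6+ minimal hosting model; the controller and action names are placeholders, not code from my app). The [ResponseCache] attribute mostly just writes Cache-Control headers - the client side - while the ResponseCaching middleware honours those same headers when deciding what it may also keep server side.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllersWithViews();
builder.Services.AddResponseCaching();          // the server-side cache store

var app = builder.Build();
app.UseResponseCaching();                       // must run before the endpoints
app.MapDefaultControllerRoute();
app.Run();

public class ReportController : Controller {
  // Writes "Cache-Control: public,max-age=600" on the response (client caching);
  // the middleware above uses that same header to decide what it may cache server side.
  [ResponseCache(Duration = 600, Location = ResponseCacheLocation.Any)]
  public IActionResult BigList() => Content("...", "text/plain");
}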

Googling anything to do with Client Caching for Authorized Endpoints in AspNet Core returns a page full of references to ResponseCaching. You really have to search through the results to find anything specific to client side caching. It is easy to think, as I did, that the only way to achieve client side caching is with ResponseCaching.

Turns Out, It's Easy

So finally, after checking out all the links, lodging this request with the AspNet Core team, and actually cloning and modifying the repo to add a flag to allow Response Caching on authorized endpoints, I learned more about caching and came up with this question on StackOverflow.

In the end, it's just a matter of adding the cache-control header to the response. It can also be done directly like this, but I found using an attribute decoration to be what I was after. I had assumed that the middleware would specifically remove such a header for authorized endpoints, but it turns out I was wrong.
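For reference, 'directly' means something like the following - a minimal sketch of my own (controller and action names are hypothetical), writing the header inside the action itself:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Net.Http.Headers;

public class ListController : Controller {
  public IActionResult BigList() {
    // 'private' keeps the response out of shared/proxy caches but lets the
    // browser cache it for an hour.
    Response.Headers[HeaderNames.CacheControl] = "private,max-age=3600";
    return Content("...", "text/javascript");
  }
}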

I wasn't looking for any HTTP error code interactions, so I simplified it down to this:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Net.Http.Headers;

namespace AspMvcApp {
  // https://stackoverflow.com/questions/67901155/why-is-asp-net-core-setting-cache-control-headers-on-error-responses

  public class CacheControlAttribute : ActionFilterAttribute {
    // Cache lifetime in seconds; the default of 0 effectively disables caching.
    public int DurationSec { get; set; } = 0;

    public override void OnActionExecuted(ActionExecutedContext context) {
      SetCacheControlHeaders(context.HttpContext.Response);
    }

    private void SetCacheControlHeaders(HttpResponse response) {
      // 'private' keeps the response out of shared/proxy caches while still
      // letting the browser cache it for DurationSec seconds.
      response.Headers[HeaderNames.CacheControl] = $"private,max-age={DurationSec}";
    }
  }
}
And here is the authorized cached endpoint:
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace AspMvcApp.Controllers {
  [Authorize]
  [CacheControl(DurationSec = 2629746)]  // cache for a month
  public class ScriptController : Controller {
      [Route("Script/list-view-cache.js")]
      public IActionResult list_view_cache(string hash) {
        var scriptText = ListViewCache.GetDataTableCacheItemByHash(hash).DataSerialised;
        return Content(scriptText, "text/javascript");
      }
  }
}
which yields our cached dynamic endpoints like a charm:


But You Can Still Have Your Server Side Caching If You Want

Since I only want client caching, I no longer need the server side code, but before I discovered this, I had modded the existing AspNet Core ResponseCaching code to add an AllowAuthorizedEndpoints flag.

There is still a case for allowing server side caching, but only where a dynamically generated resource is used by all authorised users and can be frozen for a period of time.
If you need it, I've created a GitHub repo here. It has examples and a small amount of documentation.
Clearly this repo won't evolve with the ongoing AspNet project, so use at your own risk. It would be good if the team would see fit to include the PR, but I can't see it happening any time soon.
They have recommended using Output Caching as a more flexible alternative, so the same thing may be possible with that API. Feel free to comment if you are familiar with the OutputCaching service.
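For anyone wanting to explore that, here's a rough, untested sketch of what it might look like (assuming .NET 7+; the policy name is made up, and as far as I can tell the default output-cache policy still refuses to cache authenticated requests, so a custom policy would be needed for the authorized case):

using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddOutputCache(options => {
  // A named policy a script endpoint could opt into via [OutputCache(PolicyName = "Scripts")].
  options.AddPolicy("Scripts", policy => policy.Expire(TimeSpan.FromDays(30)));
});

var app = builder.Build();
app.UseOutputCache();
app.Run();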

Sunday, November 27, 2022

Dropbox Escalation Steps


This is in relation to the following problem, which reoccurs every 12 months or so for me on Windows.

I'm documenting the escalation steps here as it otherwise takes me days or weeks of back and forth with Dropbox support to work through all these steps in a scattergun manner (half the time the support rep recommends the same step that the previous rep recommended earlier).

It is good to get onto this problem early, since it gets steadily worse over time. Each time, I've found that fixing it produces quite a number of conflicted files, where the 'normal' file is quite old (weeks or months) and the up-to-date file that was there before resolving the issue is marked as a conflict file. Hence it is necessary to search the entire Dropbox folder, removing each of the corresponding original files before renaming the conflict file back to the original name. Daymn.


I am running Windows 10 Pro 64 bit v1903 build 18362.535 on VirtualBox (now Parallels - same problem!) on a MacBook Pro.

I have been a Dropbox user for over 10 years, I'd say. I am a software developer.

Until recently (around the same time as the visual updates to the Dropbox window which appears after right-clicking the system tray icon), Dropbox would sync up within an hour or so (Dropbox consumes too many system resources, so I leave it paused most of the time and sync at night time).

Recently, it says something like 'syncing 12,834 files . 2+ days'. There is some minor movement up and down as it indexes files but basically, it never completes syncing any more - it's probably a month now since I had a complete sync. I leave it on overnight, but in the morning very little has changed.

I'm on a fast internet connection, and have turned off all throttling in the settings. I can download a 5GB installation package in about half an hour, and the files I'm trying to sync would come to a single GB at most. Speedtest reveals a 24ms ping, 20Mbps down, 17Mbps up. So the '2+ days' part is clearly absurd, unless there's some internal throttling or error taking place.

I have tried:

  • logging out of the Dropbox account and in again
  • uninstalling and reinstalling Dropbox
  • double checking that the Dropbox folder is excluded from virus scanning (Windows defender)

So, to sum up:

  • it all worked fine till a month or two ago
  • never gets through syncing
  • no error messages or problems displayed

----------------------------------------------------------

Suggestion 1: Fix Hardlinks

1) Make sure the Dropbox desktop application is running

2) Click on the Dropbox icon in your menu bar and then click on the gear in the Notifications panel

3) Select 'Preferences'

4) Click on the 'Account' tab, and then click in the window under the tab (just anywhere will do)

5) Hold down the 'Alt' key and press 'h'

6) Click on the 'Fix Hardlinks' button

7) You may be prompted for a username and password; these are your computer login credentials, not your Dropbox account ones

8) Please wait about ten minutes to see if Dropbox resumes syncing.

If the above steps don’t resolve your issue, please repeat steps 1→5, but at step 6, this time click on the “Fix Permissions” button. Operating system permissions can prevent files from being viewed or edited by unauthorized logins. Permissions can be set any number of ways and can be restored manually through a fairly simple process.

Please note that if you use “Fix Permissions” syncing will start back over. If you need to use this step, please allow for a few hours to pass and let me know if syncing becomes stuck again.

----------------------------------------------------------

Suggestion 2: Bad Files Check

Please check if your files contain an incompatible character in their name or path, because certain characters can be used on some operating systems, but not others.

If this is the case you can use Dropbox's "Bad Files Check" webpage to determine which files are not syncing due to incompatibilities:

https://www.dropbox.com/bad_files_check

In most cases you should be able to resolve the incompatibility by simply renaming the file and omitting the illegal characters. For a list of known incompatibilities and illegal characters, please visit our Help Center article:

https://www.dropbox.com/help/145

----------------------------------------------------------

Hi <Rep Name>, I've tried the bad files check multiple times, but it just sits there for an hour or more and displays a blank screen when it's finished.

I'm pretty certain anyway that I don't have any bad files. I keep a tight rein on what's created on my drive, and I'm a power user, so I'm familiar with the ways bad characters creep into the file system.

----------------------------------------------------------

Suggestion 3: Manual Permission Reset

1) If Dropbox is running:

- Click on the Dropbox icon in the system tray

- Choose "Exit"

2) Press the Windows Key + R (at the same time) then type "cmd" and press enter to open the Command prompt.

3) Copy and paste the following lines into the Command Prompt, one at a time, and press RETURN after each one.

icacls "%HOMEPATH%\Dropbox" /grant "%USERNAME%":(F) /T

icacls "%APPDATA%\Dropbox" /grant "%USERNAME%":(F) /T

icacls "%LOCALAPPDATA%\Dropbox" /grant "%USERNAME%":F /T

icacls "%PROGRAMFILES%\Dropbox" /grant "%USERNAME%":F /T

icacls "G:\Dropbox" /grant "%USERNAME%":(F) /T

4) Restart Dropbox from Start -> All Programs. Please also restart your computer.

----------------------------------------------------------

Suggestion 4: Advanced Reinstall

Please follow the instructions to perform an advanced reinstall from the following Help center article for your OS:

https://www.dropbox.com/help/desktop-web/advanced-reinstall

Before running an advanced reinstall

  • Save and quit all programs accessing files in the Dropbox folder.
  • If you use selective sync, take note of the preferences you applied.
  • If Dropbox is not in the default location, note where it is located.

Windows

1) Quit the Dropbox desktop app. (To do so, click the Dropbox icon in your taskbar, click your avatar (profile picture or initials), and click Quit).

2) Uninstall the Dropbox desktop app from your computer.

3) Restart your computer.

4) Open the Windows Registry editor.

5) Double-click the folder named “HKEY_LOCAL_MACHINE” to open it.

6) Inside that folder, find the “Dropbox” and “DropboxUpdate” folders by searching in the folders named “SOFTWARE” and “WOW6432Node”. They’ll be in either or both locations.

7) Delete the “Dropbox” and “DropboxUpdate” folders. (To do so, right-click each folder and click Delete).

Note: Don’t delete the main Dropbox folder on your computer. At no point in this process should you be deleting that folder.

8) Open File Explorer and search for the folder “%LOCALAPPDATA%”.

9) Inside that folder, delete the “Dropbox” folder.

Note: Don’t delete the main Dropbox folder on your computer. At no point in this process should you be deleting that folder.

10) Repeat steps 8 and 9 with the following folders, deleting the “Dropbox” folder inside each: “%APPDATA%”, “%PROGRAMFILES%”, and “%PROGRAMFILES(x86)%”.

Note: If you can’t find the folders, it means they’re already deleted and you can move on to the next step.

11) Download and install the Dropbox desktop app.

Since your original Dropbox folder is still on your computer at this point, if you don’t want to create a new one, click Advanced Settings during the download prompts and click the location of your existing Dropbox folder. For example, if your Dropbox folder is located at “D:\OtherDrive\Folder\Dropbox”, choose “D:\OtherDrive\Folder”.


Friday, January 21, 2022

Experimental TrueNAS Server Build

More and more, open source software is appearing that will do things that used to require a very expensive piece of hardware and/or software. TrueNAS (with TrueNAS Core, formerly FreeNAS) is one such open source project.

With a view to moving my organisation's data to TrueNAS, as a test case I decided to configure an old circa-2012 HPE ProLiant SE1220 with TrueNAS. It was quite an adventure, and it naturally led to a blog post.

At this point, I'd like to thank Ben Pridmore, of First Nations Media, for productive discussions and suggestions at all stages of this investigation, and for collaboration with the hardware issues.

I think a lot of organisations have old server hardware lying around, and if not, there is incredibly cheap superseded hardware online to play around with.

First, let's go over some terms and background, because the tech moves quickly, and perhaps, like me, this is the first time you've had to look at SAS or HBAs in any kind of depth. I'll assume you know what RAID is, since that is foundational for the whole post.

SATA

I'm sure everyone is familiar with SATA, which has been the hard drive interface of choice for a long time now. SATA stands for Serial ATA (ATA standing for Advanced Technology Attachment - hello, marketing terminology!).  SATA took over from PATA, or IDE as it was also known, with PATA being Parallel ATA or originally just called ATA before there was a need to distinguish between serial and parallel versions.

A SATA III interface can deliver 6Gbit/s (600Mbyte/s), with SATA II half that (3Gbit/s), and SATA I half again (1.5Gbit/s). SATA can be used only for single drives: one drive per SATA port/cable.

SAS and SAS Expanders

SAS evolved out of SCSI, and fulfils a similar role to SATA, but it's a higher end product used in servers and enterprise hardware. It has multiple channels, better bidirectional throughput, higher signalling voltages (hence greater maximum cable length) and a number of other advantages, and the common speeds are currently 3Gbit/s, 6Gbit/s and 12Gbit/s.

The key hardware item to be aware of is the SAS Expander, which is the basis of any server RAID unit, typically allowing up to 16 SATA connections on a backplane to be connected to a single SAS cable. With that cable plugged into a compatible SAS controller, the OS can access the individual drives much as if each were connected to the motherboard via its own SATA controller.

See these good articles/posts for details: 

    http://sasexpanders.com/faq/

    https://www.truenas.com/community/resources/dont-be-afraid-to-be-sas-sy-a-primer-on-basic-sas-and-sata.48/

The question that immediately came to mind was about bottlenecking, given that we are accessing all those drives through one cable. The above article makes the point that most mechanical drives operate at around 140Mbyte/s (1400Mbit/s), and given multiple channels and the fact that only a few drives in the array are likely to be operating at once, in general there is ample bandwidth to avoid saturation.

With SSDs however, the situation is very different. With a typical 500Mbyte/s (5Gbit/s) bandwidth, several SSDs may rapidly saturate a SAS connector. High bandwidth SAS plus low disk numbers may be necessary for smooth operation of an SSD array.

Host Bus Adapters (HBAs)

The controller cards necessary to manage a SAS expander's drives fall into two categories: HBA and RAID. An HBA card transparently connects the drives on the SAS expander to the motherboard and OS - it doesn't try to provide any management layer, caching or additional smarts. Conversely, a RAID card undertakes the management of the drives as a RAID array - this is 'hardware RAID' - and the motherboard and OS often see only one 'logical' drive, or several, depending on how many logical drives have been set up in the RAID configuration. The RAID card manages all aspects of the RAID array, and the OS is simply the 'end user', seeing only what the RAID card wants it to.

An HBA or RAID card has operating firmware and a separate firmware BIOS (often referred to as the 'SAS BIOS') that can be accessed during startup (just like the motherboard BIOS - it's easy to get confused!). The SAS BIOS can be used to set up things like boot devices (for HBA) or RAID configuration (for RAID cards). 

For many models of card, the operating firmware and the BIOS can be flashed with different versions that convert the card's behaviour to that of an HBA or a RAID card. However, a card designed to work in one mode may not be as reliable in the other.

SAS Card Compatibility with TrueNAS

The HBA/RAID issue is a central one in the TrueNAS forums. ZFS, and hence TrueNAS, is designed to have direct and full access to the disk hardware through an HBA: TrueNAS is 100% software RAID.

This is at odds with hardware RAID - it is definitely not recommended to use ZFS on top of hardware RAID:

https://www.truenas.com/community/threads/if-i-had-to-use-hardware-raid-which-option-is-more-preferable.77954/

Likewise, you can use a RAID card in JBOD mode and switch off as much RAID functionality as possible, but there will still not be direct access to the individual disks, and this is going to be a red flag.

But remember that a lot of RAID cards can be re-flashed into HBA mode. How about that option?

Unfortunately the consensus is that just because you can flash a particular card as an HBA and it appears to work, that doesn't mean it's a good idea to do so.

TrueNAS and ZFS can drive the hardware extremely hard during data rebuilds, and this is likely over time to expose any weaknesses in the controller card. 

Here's some of the debate:

https://www.truenas.com/community/resources/whats-all-the-noise-about-hbas-and-why-cant-i-use-a-raid-controller.139/

https://www.truenas.com/community/resources/multiply-your-problems-with-sata-port-multipliers-and-cheap-sata-controllers.177/

The TL;DR of all this is: if you don't want to roll the dice with your data, buy and use a TrueNAS-recommended HBA card to replace any RAID card you might have.

The LSI 9211-8i (PCIe 2.0 6Gbit/s), LSI 9207-8i (PCIe 3.0 6Gbit/s) and LSI 9300-8i (PCIe 3.0 12Gbit/s) appear to be the gold standard and available quite cheaply online.

The post states that 'the LSI 9240-8i, IBM ServeRAID M1015, Dell PERC H200 and H310, and others are readily available on the used market and can be converted to LSI 9211-8i equivalents.'

My server contained a RAID card (the HP SmartArray P212) so I ordered an LSI 9211-8i HBA card second hand online for around $60US.

Anatomy of the Server

First, let's have a quick look at the anatomy of the server in light of the above discussion.

This is a top view of the server. The top area of the picture, inside the green rectangle, is the SAS Expander - an enclosure where the 12 SATA drives go (these are 2TB 7200RPM drives).  If you look along the bottom edge of the drives, you can see the edge of a circuit board running along the entire length of the expander.  The chassis and circuit board are basically a drop-in unit. They attach to the power supply, all the SATA drives plug directly in to the circuit board, and the whole thing plugs into the rest of the server via a single SAS cable.

There's a photo from the front of the server showing the drives, following the below photo.

The next block down, inside the aqua rectangle, are eight fans - of no configuration consequence, but they are very loud on startup.

Inside the purple rectangle is the area for two processors and RAM for each (only one is installed). There is a near invisible clear plastic air-directing cover over this area, to which I've taped a couple of screws during disassembly.

The metal box inside the red rectangle is a PCI extender, containing a SAS controller card and a matched pair of hard drives for use as mirrored system drives for the server (these are also attached by cable to the SAS Expander backplane). The third photo contains detail of what's inside.



Below is the server with the PCI extender box removed. It has been flipped over 180 degrees: when fitted, the PCI connectors, seen from the top in the green rectangle, fit downwards into the two black PCI slots towards the top of the image. 

The LSI 9211-8i, in the tan rectangle, is shown fitted to the PCI extender slot. Note that the only connection to it is the SAS cable from the SAS expander, which is the long cable with the black braided cover. Below, on the table and in the red rectangle, is the removed HP SmartArray P212, with its memory module and battery (some RAID cards have battery-backed RAM to preserve the integrity of their write cache in the event of power failure).

The dual system disks (aqua rectangle) enclosure can be seen poking out from underneath the LSI HBA card. It was tempting to remove these from the SAS Expander and plug them directly into two of the six vacant SATA ports on the motherboard, but the enclosure had a small backplane through which power was delivered, and I was unsure what other smarts might be involved. Rather than reroute power and possibly open up a can of worms with the SATA interfaces, I just left these disks alone.


Setting up TrueNAS

After fitting the LSI 9211-8i HBA card and reassembling the PCI extender chassis, I proceeded to install TrueNAS by creating a bootable USB with the latest version as instructed on the TrueNAS site.

The install went smoothly, all drives were detected, and I was able to mark both the system drives for install, ending up with a mirrored system disk configuration.

On rebooting, however, I found that the server would cycle through all the boot options and end up cycling at network boot, which from experience is where the boot cycle goes to die. I checked the motherboard BIOS and it was set to boot from the HBA card, but wasn't detecting anything bootable.

On googling this, it became clear that the problem was that an unconfigured HBA would just try to boot from the first two available drives, which were very likely to be the data drives. It was necessary to boot into the SAS BIOS and configure the boot order.

Configuring Boot Order in the SAS BIOS

At this point, I did not know what firmware version my HBA card was running and had not fired up the SAS BIOS at all. In hindsight, it would have been good to check this before commencing any operations involving the SAS expander (such as the TrueNAS installation!).

As it happened, there was a problem with the SAS BIOS on the card which prevented me from booting into the BIOS to make these configuration changes, but it appears this problem is mainly specific to HP hardware, so for now I'm going to pretend I didn't have this problem and go ahead with the boot configuration as it should have happened (and did happen once the issue was fixed). I'll return to the other problem, which required removing the HBA card and re-flashing it in another computer, in the next section.

Booting the server takes a while, and eventually the screen displays something like 'hit any key for Option ROM'. At this point, there is no message telling you what keys to hit, but you need to hit Ctrl+C to boot into the SAS BIOS. After a pause, there is a message about the LSI configuration tool, and a few more keystrokes and you are in the SAS BIOS screen.

Once in there, you'll see a single line for the SAS expander, and it's necessary to hit enter a few times to expand the disk tree (there are a few useful YouTube videos covering this whole process). Then you'll see the below.

Bay 12 and 13 here are the system disks (the highlighting obscures the details of the bottom one) and we need to mark them as boot and alternate boot using Alt+B and Alt+A. Hitting Alt+M displays a handy instructional screen showing all the special key codes.

Presumably the motherboard was previously trying to boot from Bay 0, which explains the lack of success.



After saving the config, the server booted straight into TrueNAS and after a bit of further configuration, we were up and running.

Problem with BIOS on HP Hardware and Flashing the HBA Card

As mentioned, initially I couldn't fire up the SAS BIOS. When I hit Ctrl+C, after a few seconds I got:
Fatal pci express device error B00/D09/F00 E0
Worried that I had some PCI problem with the card, I found that Googling initially suggested changing card slots. But then, luckily, I ran across this rambling but ultimately very useful post on this exact issue:


It turns out that while the latest firmware for this card (P20) works fine, the P20 BIOS is not compatible with this (and clearly a range of other) HP hardware. To get things working, it's necessary to flash the card with the P20 firmware, but the P19 BIOS! (the post has a detailed matrix of firmware and BIOS versions, reproduced below).

FW           BIOS   DL380 G7                                      DL380 G6
P19          P19    works (old)                                   works (old)
P20/< .07    P19    data corrupted (!!AVOID!!)                    data corrupted (!!AVOID!!)
P20/< .07    P20    data corrupted/DEATH on CONFIG2 (!!AVOID!!)   data corrupted/DEATH on CONFIG (!!AVOID!!)
P20/.07      P20    works/DEATH on CONFIG2 (AVOID!)               works/DEATH on CONFIG (AVOID!)
P20/.07      P19    works (THIS!)                                 works (THIS!)
P20/< .07 means all 20.00.XX.00 versions of the firmware earlier than 20.00.07.00. BIOS versions follow a different numbering scheme, with P19 = 7.37.00.00 and P20 = 7.39.02.00 (my numbers, there might be others)

At this point, I had not been able to fire up the BIOS, so I didn't actually know what versions of firmware and BIOS were on the card. I would have to download the manufacturer's drivers and boot from a USB to probe and possibly flash the system.

Now I had another challenge: I did not have Windows installed on the system, and I was doubtful if I could get it to boot from a DOS disk.

Anyway, I duly headed off to the Broadcom site, and after a bit of searching managed to find the right files. Trying to download Asset Type 'All' broke the web site, and it took me a while to realise that I had to specify 'Firmware' to get a result.



There are basically two files for the firmware update, a '9211-8i ... FW_BIOS ... for_MSDOS_WINDOWS', and an 'Installer .. for_MSDOS_WINDOWS'. There are IT and IR versions of the firmware - it's recommended to stick with the IT version for HBA use in a modern environment.
I found also that the FW_BIOS package contained all the files needed from the Installer package, so there was actually no need to download the Installer package.
I've highlighted the files I ended up using in red:


But firstly, the challenge of being able to run the flash tool. I tried creating an MSDOS boot USB and loaded the DOS version of the flash tool onto it, but as suspected, the server hardware would not recognise this.

At this point, it was really not possible to use the server to boot into the flash tool without installing Windows on it. My two options were to find some really old hardware that would allow DOS boot, or to find a Windows machine that I could fit the HBA card to in order to flash it.

Luckily, I have a modern Windows PC as a spare that I sometimes use for development and gaming. Fitting the HBA card to it was easy (there's no need to attach drives to the HBA card) and I was able to boot into Windows normally.

I copied the three files above into a temporary folder, opened up a CMD window, and ran the flash tool. Using the -listall switch I was able to see immediately that (referring to the firmware matrix in the previously mentioned post) both the firmware and BIOS were at v20.

I:\upd>sas2flash.exe -listall
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved

        Adapter Selected is a LSI SAS: SAS2008(B2)

Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
----------------------------------------------------------------------------

0  SAS2008(B2)     20.00.07.00    14.01.00.08    07.39.02.00     00:09:00:00

        Finished Processing Commands Successfully.
        Exiting SAS2Flash.

At this point, I went back to the Broadcom site and downloaded the P19 version of the firmware and BIOS package. I then replaced just the .rom file (from the 'sasbios_rel' folder) in my temporary folder with the P19 version, and ran the update as below. I also ran an additional command, not listed, to delete the firmware first, but it reported errors that seemed to indicate that it was no longer necessary to run this command in the Windows versions. I would nonetheless follow the instructions on the Broadcom site here.

I:\upd>dir
 Volume in drive I is Temp Install
 Volume Serial Number is 0A7D-0D64

 Directory of I:\upd

15/01/2022  05:09 PM    <DIR>          .
15/01/2022  05:09 PM    <DIR>          ..
11/03/2016  04:30 PM           722,708 2118it.bin
19/03/2014  11:36 AM            83,159 mptbios.txt
19/03/2014  11:39 AM           201,216 mptsas2.rom
11/03/2016  04:29 PM           166,912 sas2flash.exe
               4 File(s)      1,173,995 bytes
               2 Dir(s)  10,252,136,448 bytes free

I:\upd>sas2flash.exe -f 2118it.bin -b mptsas2.rom
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved

        Adapter Selected is a LSI SAS: SAS2008(B2)

        Executing Operation: Flash Firmware Image

                Firmware Image has a Valid Checksum.
                Firmware Version 20.00.07.00
                Firmware Image compatible with Controller.

                Valid NVDATA Image found.
                NVDATA Version 14.01.00.00
                Checking for a compatible NVData image...

                NVDATA Device ID and Chip Revision match verified.
                NVDATA Versions Compatible.
                Valid Initialization Image verified.
                Valid BootLoader Image verified.

                Beginning Firmware Download...
                Firmware Download Successful.

                Verifying Download...

                Firmware Flash Successful.

                Resetting Adapter...
                Adapter Successfully Reset.

        Executing Operation: Flash BIOS Image

                Validating BIOS Image...

                BIOS Header Signature is Valid

                BIOS Image has a Valid Checksum.

                BIOS PCI Structure Signature Valid.

                BIOS Image Compatible with the SAS Controller.

                Attempting to Flash BIOS Image...

                Verifying Download...

                Flash BIOS Image Successful.

                Updated BIOS Version in BIOS Page 3.

        Finished Processing Commands Successfully.
        Exiting SAS2Flash.

I:\upd>sas2flash.exe -listall
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved

        Adapter Selected is a LSI SAS: SAS2008(B2)

Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
----------------------------------------------------------------------------

0  SAS2008(B2)     20.00.07.00    14.01.00.08    07.37.00.00     00:09:00:00

        Finished Processing Commands Successfully.
        Exiting SAS2Flash.

I:\upd>

This appeared to have worked as desired. After this, I removed the HBA card from my Windows box, reinserted it to the server, and was then able to boot into the SAS BIOS normally and make the configuration changes as outlined previously.

That's about it for this post. Before I go, I'll include one last useful link of informational videos from the TrueNAS forums. Good background:

https://www.truenas.com/community/resources/informational-videos-mostly-about-sas-hardware.105/


ADDENDUM:

After building this, 6 of the 12 disks in the server were fairly quickly (ie. within a couple of weeks) knocked out of the RAID pool due to repeated warnings. Initially I wasn't sure if the problem was hardware (SATA ports, SAS setup or HBA card), but the remaining drives seem stable. It's 2022 and this is a 2012-era server that had been in operation up until 2020, so it's not really surprising that the disks are starting to become unreliable - the consensus is about 3-5 years for a typical RAID disk's reliable lifespan.

I'm pretty sure if I was able to populate the server with new disks, this would fix the issues (unfortunately I don't have spare newer disks, or a budget for brand new ones).

However this highlights again how hard TrueNAS/ZFS drives the disks, and how sensitive it is in reporting and reacting to problems. I suspect that in the original hardware RAID configuration, we wouldn't even have heard about the problems with the disks.

TrueNAS will potentially place a lot more stress on your hardware than other RAID setups, so the hardware has to be good quality and well integrated. The payback is extremely reliable storage and near-paranoid reporting of any issues.


Saturday, August 15, 2020

Xamarin iOS Deployment - The Apple Developer Account

Deployment of Xamarin apps onto Apple devices is complicated. Not only is the Apple developer system quite complex and unforgiving, but because building requires an Apple computer somewhere in the chain - Visual Studio essentially remote-controls Xcode running on the Apple machine, using it as a build server - it is not always clear where settings should be made, especially when trying to fix any errors that occur.

Firstly, let’s give an overview of the Apple developer account.

To distribute apps, even using ad-hoc methods to sideload, you need to enrol in the Apple Developer Program for $US99/year. Once you have signed up, you’ll have an account at developer.apple.com, which will look something like this:


The key page here is ‘Certificates, IDs & Profiles’. The other pages are less used: App Store Connect is where you register a profile for App Store evaluation, and the remaining pages are essentially support and resource pages.

Certs/IDs/Profiles


Let's look at the profiles section first. A profile is a way of gathering together all of the information needed to debug or distribute an app. We set a distribution type, we can list capabilities, we define an App ID, and certificates and devices can be associated with the profile. Generally we would want one profile per app that we are going to develop.

Note that we can download a profile, and on OS X, if we double-click the profile, the certificates will be installed and the devices linked against the profile will be approved for installation in ad hoc mode. If devices are added to a profile, then the profile should be re-downloaded and double-clicked again.

The Devices section lists the UDIDs (unique device identifiers) of all of the devices approved for use. It's a little bit tricky to get the UDID for an iPad or iPhone (see whatsmyudid.com), and I recommend using a cable to do so rather than installing one of the various pieces of software available. It's not a good idea in principle to give an app the level of permission that's required to find the UDID, no matter how useful those apps might be.

You can list up to 100 of each type of device, and these can be used for testing or ad hoc distribution. Note that you can only remove devices once a year, when your account is renewed, so it's important to remove all unused devices at that time.

Editing the profile (click the profile in the ‘Profiles’ page, then click ‘Edit’ in the summary screen that appears) allows you to associate particular certificates and devices with particular profiles.

Identifiers are a handle used to identify and control access to a particular app or a range of services, including music, notifications, gift cards, etc. The app identifier is fairly straightforward, just containing a description, a bundle ID which looks a lot like a domain descriptor (eg. com.mybusiness.-), and a list of capabilities.

Once an app marked with an identifier has been submitted for App Store evaluation through App Store Connect, the identifier cannot be deleted, even if the submission was not approved.

The Certificate section contains a list of certificates used to sign app bundles or activate developer functions. Xcode help (help.apple.com/xcode/mac/current/) is a valuable resource in relation to how the screen works. There is a lot of automated functionality in Xcode associated with this page which can be quite confusing.

The first thing to note is that a range of certificate types are obsolete now such as the iOS development and iOS distribution certificates, Mac development and Mac distribution certificates. These have been replaced by Apple distribution and Apple development. Have a look at the range of certificate types here (https://help.apple.com/xcode/mac/current/#/dev80c6204ec). For ad hoc app development only the Apple development and distribution certificates are used.

Certificate Management in XCode

Certificates must be managed and generated in Xcode as well, and will automatically be uploaded back to the developer site, so let's have a quick look at the Xcode screen under Preferences > Accounts > Manage Certificates.

Note that the certificates are not actually stored in Xcode but in the OS X keychain, which is available via Keychain Access under Applications > Utilities. Type ‘Apple D’ into the search bar to list just the relevant certificates. They can be deleted there if necessary. However, it appears that creating certificates in the Keychain using a Certificate Request is not to be recommended, as the private key is not saved; Xcode will generate and configure the certificates automatically.

You'll notice there are some obsolete certificates plus some marked ‘not in keychain’ or ‘missing private key’. These certificates have probably been generated on a different Mac, which then automatically uploaded them to the central site. However, the private key is present only on the computer which generated them: the certificate must be exported from that computer and then installed on the local computer in order for the private key to be available.

The best way to synchronise certificates, profiles, etc, between development machines is to select the AppleID in the Xcode preferences ‘accounts’ screen, and ‘Export Apple ID and Code signing Assets’ using the gear icon menu.

So just to reiterate: if a new certificate is created using Xcode, that certificate is automatically stored locally in the Keychain, and is automatically uploaded to the Apple Developer website. Note that the Apple Developer profile must then be updated locally by downloading and double-clicking it, to ensure that the certificate is included in the build profile (the build will not succeed until this is done). But the private key used to unlock the certificate remains only on the local computer. It must be exported manually, or exported along with all local certificates for the Apple ID using the ‘Export Apple ID and Code Signing Assets’ method described above.

Checklist: Adding a new Ad-Hoc device

1) add UDID to Device page

2) Go to Profiles, click the profile, then click ‘Edit’ on the page that comes up. Add the device.

3) Download the new profile in Visual Studio on Windows (Tools > Options > Xamarin > Apple Accounts > View Details > Download All Profiles), or in XCode on the Mac (Preferences > Accounts > Download Manual Profiles)

Saturday, December 21, 2019

DD-WRT on Linksys E1200 as a low cost Wifi AP

This is a brief post about a solution I came up with a while back. The problem: I needed three wireless networks to cover my home, home office and granny flat. I was able to run Cat5 cables to the two locations.

After some research, I decided to give DD-WRT a try as a replacement for the notoriously flaky manufacturer firmware for commodity hardware. I have always found that the factory firmware offers an interface that is inconsistent, buggy, hard to use, and often doesn't use standard naming conventions for features, so it's remarkably hard to work out what settings to use.

However none of my existing random modems supported DD-WRT, so I decided to buy something that was on the supported list.

The Linksys E1200, supporting wireless N, is around $55 AU ($35 US) and is still available now (https://www.ebay.com.au/p/177908116) , even though it's becoming scarce. Wireless N is still plenty fast enough for me.

The steps are pretty simple, just go here: https://wiki.dd-wrt.com/wiki/index.php/Linksys_E1200v2
and follow instructions.
It was easy to use my Internet Provider's modem as the main DHCP server, and just switch off DHCP and all other services on the E1200s and use them simply to create new wifi networks to extend the range of the VDSL service.
DD-WRT proved to be very reliable, consistent, and easy to use. These have served me for about 6 years now without any issues whatsoever.

At work, I use a cheap ($300) Netgate box running pfSense as the main router for the entire organisation, including three subnets and routing over a fixed fibre line to a secondary premises. I cannot say enough good things about pfSense - it is far better than DD-WRT, though with an associated rise in hardware costs; the software itself is free and you can run it on a variety of hardware.

For me, DD-WRT was still the best budget home solution.

Monday, April 8, 2019

Deploying a Xamarin UWP Project to a Surface Tablet


DotNet Core

I’ve been converting all my dev frameworks over from ASP.NET WebForms to ASP.NET Core MVC over the last six months. Although the level of change has been epic, the result is decidedly worth it.

The main issue I’ve encountered is that a number of third party products, for me reporting tools such as Telerik or DevExpress, still don’t support .NET Core, so we’re stuck in some cases with .NET Standard 2.0 (to maintain .NET Core compatibility) and .NET 4.6. However, the tipping point is quickly being reached, and .NET Core ports are popping up all over the place. Especially with Microsoft's announcement that they are stopping development on the 'classic' .NET Framework, there is no doubt that .NET Core is the way of the future.

Xamarin and UWP

I have also been building my first mobile application with Xamarin. This has been a great experience in the main, because I can reuse so much of my existing infrastructure. It is a truly smooth toolset.

As always, however, the first time deployment to each platform (UWP, iOS or Android) has many pitfalls and can take days to sort out.  
The Microsoft documentation is, while improving from the abysmal quality of the old days, still pretty substandard. The constant churn of adding and dropping features means that most of what you can find is only partially, if at all, still true, and often quite cryptic into the bargain. They do well with reference material if you know what you’re looking for already, but in terms of laying out a clear overview of the elements of the situation, and clear steps to solve a problem, not so much. For that, we need StackOverflow.

Sideloading to UWP

So the following steps will no doubt change, probably rapidly, but this is a walkthrough of how to sideload a Xamarin UWP project as of April 2019. I will *not* be using the app store - this is a line of business (LOB) app which will be loaded directly onto the device.  
iOS and Android versions of this to follow soon.

1. Choose UWP version compatibility


This didn't cause me any problems, as I have a lot of control over my endpoint devices (which are also brand new), but when Googling for solutions to other problems, there were reports of weird errors due to Xamarin code from old SDK versions not being properly updated. This is a good factor to keep in mind when troubleshooting.

2. .NET Native

I read in several places that it was important to deploy code in release mode. This turns out not really to be true. Debug mode will perform perfectly well on the target device, and can be used for testing -
I think that the reason for the recommendation centres around .NET Native.



.NET Native is an additional step in the compile process that compiles to native Windows binaries, rather than the intermediate (IL) language that .NET normally uses. There are a few things to know about .NET Native:

  • the compilation process takes up to 20 times longer than a compilation without it
  • with it, you lose the ability to use the .NET reflection features (they are dependent on IL) and certain other features (see here for a very informative article)
  • by default, .NET produces packages *without* .NET Native in Debug mode, and *with* it in Release mode (this can be a bit unexpected)
  • it is claimed to speed up the application dramatically, but the article linked above does give mixed results in speed outcomes


Since I use an ORM (PetaPoco) that relies on reflection extensively, that immediately ruled out .NET Native for me.  I note that MS Entity Framework, and I think all ORMs realistically, use reflection.
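To see why reflection matters here, this is roughly the kind of thing an ORM does internally - not PetaPoco's actual code, just an illustrative sketch of mapping a row onto an object by discovering its properties at runtime:

using System.Collections.Generic;
using System.Reflection;

public static class TinyMapper {
  // Map a row (column name -> value) onto a new T by matching property names.
  public static T Map<T>(IDictionary<string, object> row) where T : new() {
    var item = new T();
    foreach (PropertyInfo prop in typeof(T).GetProperties()) {
      if (prop.CanWrite && row.TryGetValue(prop.Name, out var value) && value != null)
        prop.SetValue(item, value);
    }
    return item;
  }
}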

The reason I had to investigate all this was that initially I just deployed with default settings in release mode (so, in hindsight, *with* .NET Native) and my application silently crashed on startup. No error message, no crash dump, nothing. I added exception handling right back to the first lines of startup with no change in behaviour. So it appears that if the application tries to use any of the forbidden features in .NET Native mode, it immediately crashes without any kind of notification or exception reporting. Great.
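For the record, this is the kind of handler I mean - a sketch only (a hypothetical UWP App.xaml.cs), and as I found, a .NET Native failure of this sort can kill the process before any managed handler ever runs:

using System.Diagnostics;
using System.Threading.Tasks;
using Windows.UI.Xaml;

sealed partial class App : Application {
  public App() {
    this.InitializeComponent();

    // Last-chance handler for exceptions on the UI thread.
    this.UnhandledException += (sender, e) => {
      Debug.WriteLine(e.Exception);
      e.Handled = true;   // try to stay alive long enough to log something
    };

    // Exceptions from faulted Tasks that nobody awaited.
    TaskScheduler.UnobservedTaskException += (sender, e) =>
      Debug.WriteLine(e.Exception);
  }
}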

The 'Enable static analysis' flag is meant to provide a screening mechanism to warn about code issues like the above. However, I couldn't get it to work out of the box, and because of the reflection issues, I didn't bother with it in the end. I note that I also had some permissions issues with compilation initially ('Access to the path 'C:\Program Files (x86)\Windows Kits\10\App Certification Kit\SupportedAPIs-x64.xml' is denied.'), which required me to reset the permissions on that folder - likely just a hiccup with VS or the installer.

So in summary, you may wish to tackle .NET Native, but it does add a significant amount of complexity to the compilation process. My recommendation would be to leave it all switched off initially, until your application is tested and deployed, and then come back to it as an additional optimisation later on, allocating significant time for assessing feasibility and then testing it.

3. Ensure the target device (tablet) is not in S Mode

'S' mode locks the device to using only Windows Store apps, to provide higher security. A device can be switched out of S mode easily enough with no licensing implications, but note that once you've switched, you can't go back, even with an OS reinstall.

Check control panel -> System -> About, to see if your device is in S mode.

Here are the instructions to switch.

I note that the following steps were a lot easier if I switched explorer folder options to:  
- untick 'hide extensions for known file types'
- tick 'expand to current folder'

4. Install the project certificate

- double click the .cer file in the project deployment folder
- click 'install certificate'
- choose 'local machine'
- choose 'place all certificates ...', click 'browse'
- scroll down, select 'Trusted People' (I know that sounds crazy, but this is the one)
- finish

5. Install and Start the project

 Double click the '.appx' or '.appxbundle' file, click 'install' and then when installed, 'launch'

Saturday, April 21, 2018

dotNet Data Binding – Part 1 – The Issues

(this post is from 2008)

There is still a lot of debate about Data Access Layer design in software development, and about layered architecture in general: what to do, and how to accomplish it.
This is a very broad topic and I have been struggling with it for some time; there are not only different requirements for different situations, but there are individual preferences for particular styles and there are additional requirements forced upon coders by what the software out there is capable of doing.
In these posts, I’m going to try to make some really in-depth sense of the major issues to do with data binding in .NET. The discussion and even the details of the principles involved, however, will be applicable to any language or platform capable of implementing the kind of things we’re talking about, and I’m pretty sure will be a good match for Java at least (since that’s what .NET was ripped off from … oops … don’t mention the war ….).
I develop winforms and web applications, with occasional mobile device work as well, and I’m looking for a robust, flexible layering model I can use to enable re-use of code and classes throughout any kind of application I choose to develop. While the web paradigm has quite different demands of the presentation side of the application from the others, I can’t see any reason why a well architected application back end wouldn’t be optimal for all purposes.
Layered Design
Layered application design has been around for a long time.  In principle, I don’t think anybody debates that it’s a good thing.  It’s the implementation which is fiercely contested.
So let’s go back to the principles of layered architecture.   If you haven’t come across layered or tiered architecture before, have a look on Wikipedia here. For a good article on the Model-View-Controller pattern, probably the most prevalent layering pattern in common use at present, have a look here.
I’m not going to worry too much about the details of the different competing patterns, because I want to accommodate them all – and their similarities are generally much greater than their differences.
The idea is that layered architecture is good because it allows functionality to be broken into very discrete and controlled units. 
The bottom ‘Data Access Layer’ or DAL is generally based on database tables, but could be XML, data returned by web services or any other data source.  In a layered architecture, this information is ‘wrapped’ in a series of objects which generally mirror the granularity of the data, for example there is usually one object per database table, with a data property for each column.
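As a concrete (and entirely hypothetical) example, a generated DAL class for a Customer table might be nothing more than this:

using System;

// One class per table, one property per column - typically produced by a code generator.
public partial class Customer {
  public int CustomerId { get; set; }
  public string Name { get; set; }
  public string Email { get; set; }
  public DateTime CreatedOn { get; set; }
}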
The DAL certainly fits into the ‘Model’ grouping of the MVC pattern.  It can be debated that the ‘model’ grouping should also contain extensions to the basic DAL necessary to implement higher functions such as status transitions of the data objects in order to keep basic data model integrity during such transitions (eg. validation of critical data elements and relationships).  This should not be business logic as such, but rather the low-level logic required for very basic interpretation and manipulation of the data by any and all calling applications – it’s the protection that the DAL needs to make sure that outside manipulation doesn’t ‘break’ its most basic functions.
I would put these extension classes in a separate group to the basic DAL classes (but it doesn’t matter overly), and they would then complete the Model grouping of the MVC pattern.
Because the basic DAL classes do tend to mirror the underlying datasource exactly, it makes sense to have an autogeneration tool to create these classes, and there are many tools out there which do just this.
The primary advantage of complete encapsulation in a data layer is that the back end storage methods can change without affecting the rest of the application at all.  All SQL and back-end-specific information is contained within the DAL, so in theory we could decide to swap RDBMS (Relational Database Management System), to change to or incorporate data obtained through web services, and many other options, without the rest of the application knowing or caring.
The classic case is in a large corporate environment where there may be several different legacy systems patched together and in the process of migration, or when, in today’s takeover-ridden world, the corporation suddenly buys another company and has to integrate a separate set of data from a different RDBMS into the application.
The middle layer is the ‘Application Layer’ or ‘Business Logic Layer’.  I’m going to refer to it as a Business Logic Layer (BLL) for the rest of this discussion.  This is the ‘Controller’ grouping in the MVC pattern.
The idea is that the BLL performs all the smarts of the application.  It responds to user events, determines what to do and how to do it.  It decides what data to read from and write to the DAL and oversees the CRUD (Create-Read-Update-Delete) process, manipulating the DAL as needed.
The top ‘Presentation Layer’, or the ‘View’ grouping in MVC parlance, is concerned with presentation of information to the user in the form of application screens or web pages, and the gathering of user responses. 
‘Views’ are essentially groupings of data similar to the View or Query concept in a database system.  Views or collections of views are presented to the user in order (together with fixed structure and content) to make up a screen or page.
In the MVC pattern, there is provision for the model to update the view if underlying data changes asynchronously (eg. changed by another user), but this would presumably be via the management of the BLL.
The key issue in working out the division of logic between the BLL and Presentation Layer is to consider how the information would be presented in several different front end media, eg. WinForms, Web Page, PDA page.
Much tightly-integrated information can be displayed on a WinForm, and often a single WinForm might have to be broken down into several web pages or PDA screens to achieve the same functionality.
Does the logic facilitate this? Does the screen logic concern itself only with the demands of creating the display page and interpreting the response? Does the BLL logic work at a low enough level of granularity that three quite different screen or page layouts could utilise it without modification?
If the appropriate division of labour between these top two layers is achieved, then the great advantage is that front end presentation layers for different target technologies can be swapped in and out without major recoding.
Of course, applications as a whole are generally split into functional subsystems which can be naturally encapsulated, and the layered architecture would be realised within each subsystem.
One other topic worth touching on is the additional programming features that have evolved over the past few years which make it much easier to implement this ideal structure in the OO (object oriented) world.
Partial classes and generics are two programming innovations which have quietly snuck into .NET over the last few years.  However, it is little features like these which make our lives so much easier when we are trying to develop patterns like the ones we are talking about.
Partial classing allows automatic generation tools to create a set of files containing DAL partial classes.  If we want to extend or modify a class, we merely create a second empty partial class file in a separate location, and add our logic in, overriding or extending as necessary.  When automatic regeneration of the DAL classes occurs, the original files can be blown away and replaced, with the extensions sitting safely somewhere else.  Previously, we had to rely on tagging of areas of files as ‘autogen’ or ‘user’, placing any modifications of files in the ‘user’ area – on pain of losing our changes otherwise.
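For example (file names and members are illustrative only):

            // Product.generated.cs - produced by the generation tool, and blown away
            // and recreated whenever the DAL is regenerated.
            public partial class Product
            {
                public int ProductID { get; set; }
                public string ProductName { get; set; }
            }

            // Product.custom.cs - hand-written extensions in a separate file that the
            // generator never touches, so they survive regeneration.
            public partial class Product
            {
                public string DisplayName
                {
                    get { return ProductID + " - " + ProductName; }
                }
            }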
Generics have allowed a lot more safety surrounding the use of polymorphism.  We can pass data around to re-usable classes of code while maintaining a type-safety we could only dream of previously.
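As a tiny illustration (DataObjectList is an invented re-usable class, not a framework type), a generic list of DAL objects can be declared like

            using System.Collections.Generic;

            // A re-usable generic collection of DAL objects: the compiler guarantees
            // that only objects of type T ever go in or come out.
            public class DataObjectList<T> : List<T>
            {
                // shared helper logic for any DAL type T could live here
            }

and then used like

            var products = new DataObjectList<Product>();
            products.Add(new Product());          // fine
            // products.Add("not a product");     // refuses to compile - no runtime cast required

where the commented-out line would be rejected at compile time rather than failing with a cast at runtime.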
Nullable types and interfaces deserve a mention too: nullable value types let strongly typed DAL properties represent database NULLs directly, and interfaces let the layers agree on contracts without depending on each other’s concrete classes.
In many ways, layered architecture is the natural result of the OOP principles of encapsulation and re-use.
And herein lies one of its primary other advantages – the ability to break down a problem into small, well-defined chunks with specifically written unit tests, and give them to individual members of a large development team.
Crucially, it provides a structure for developers of varying abilities and experience to contribute to a shared effort by ensuring that their component works exactly as specified.
No discussion of layering would be complete without mentioning the shortcomings of the layered approach.  We have already mentioned how it is ideally suited to the fragmented realities of large corporate infrastructures and development teams.  Exactly this advantage may make it unsuitable for small to medium sites and single-person developer teams.
Indeed, if one takes a look at all of the .NET databinding controls, they are tilted very heavily towards direct SQL access to a database by the front end pages of an application, which is diametrically opposed to the layered philosophy.  The suitability of the .NET controls for Object based data binding is very basic at best, and just might not cut the mustard when it comes to refining our layered structure (we’ll see soon …).
While many small to medium size developers would like to design according to the layered approach in order to ‘future-proof’ the design against change, it’s quite common that the additional layering amounts to nothing but a substantial extra amount of short-term work for dubious gain: things may never change, or the application may be replaced before those changes ever arrive.
It is my strong belief that many aspects of the layered approach pay off only as the complexity of the application or its supporting hardware increases.  I also believe that the quite different approaches taken by the large and small developer camps have led to a glossing over of the key issues in the design of the various layers.
Another major point of contention with layering results from the convenience of SQL.  There is one thing that SQL does very, very well that OOP can’t come anywhere near.  And that is aggregation of data.  Using a very clear, straightforward, English-like syntax, an experienced SQL designer can link together ten or twenty tables, and pull out the required aggregated data in an astoundingly short period of time.  At the same time, kiss goodbye to encapsulation and ‘smart objects’ – SQL tables and columns are just like collections of objects and properties, stored on disk and without any code.  There’s nothing like extensive use of SQL throughout an application to get OOP and layering fanatics frothing at the mouth.
The irony is that there’s nothing intrinsically wrong with either approach, they both have great advantages not shared by the other.  But they’re so damn incompatible.  One thrives on easy connection and merging of data, the other on strict separation and control.
The OOP crew are well aware of this, and this is where things like Linq come into the equation.  These are really an attempt to provide the same easy linking and aggregation as SQL, but on a fully OO foundation, and (God bless them) trying to present it in a semi-English, understandable format as well!
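For instance, a LINQ query that joins and aggregates two hypothetical in-memory lists of DAL objects reads almost like the equivalent SQL:

            using System.Linq;

            // orderLines and products are assumed to be in-memory collections of DAL
            // objects with ProductID, ProductName and Quantity members.
            var totalsByProduct =
                from line in orderLines
                join p in products on line.ProductID equals p.ProductID
                group line by p.ProductName into g
                select new
                {
                    ProductName   = g.Key,
                    TotalQuantity = g.Sum(l => l.Quantity)
                };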
Being a bit of an SQL guru and addict myself, I am watching from the sidelines with some degree of scepticism at the moment, but things are morphing and changing so quickly that I can’t help but be impressed.
Here is a diagram of a layered system showing the rough division of class functionality, keeping in mind that we’ll probably have a library of generic classes we like to use for our projects, as well as the project-specific classes.
Layer Diagram
Strong Versus Weakly Typed Layering Objects
This issue is at the root of much of the debate in layered design.  In the Wikipedia article for Strongly Typed Programming Language Benjamin Pierce is quoted as saying ‘The usage of these terms [ie. strongly and weakly typed] is so various as to render them almost useless’, so I’ll be quite specific about what I mean when I use these terms.
Take the example of a DAL object corresponding to a single database table.  The object (say Product) must represent the columns of the Product table (say  ProductID, ProductName, etc.) somehow.
Where the columns are exposed as untyped (ie. object type, a ‘variant’ in the old VB6 terminology)  properties, the layering is referred to as weakly typed, and where they are forced to an explicit basic type (eg. int, string, float, etc), the layering is referred to as strongly typed.
I would go so far as to say there are four possible ways of exposing the column data:
  1. The properties may simply be a name-value paired collection of object variables which can store any number of column names and values.  A column value might be accessed via a function like

            public partial class Product
            {
                …
                public object GetColumnValue(string columnName);
            }

     called like

            ColVal = Product.GetColumnValue("ProductID");

  2. Similar to (1), but with an enumerated indexer (allowing use of intellisense), eg.

            enum ProductColumnName { ProductID, ProductName, … };
            public object GetColumnValue(ProductColumnName columnName);

     called like

            ColVal = Product.GetColumnValue(ProductColumnName.ProductID);

  3. Alternatively, the columns may be represented by individual properties.  These may be weakly typed:

            public object ProductID;

            ColVal = Product.ProductID;

  4. … or strongly typed:

            public int ProductID;

            ColVal = Product.ProductID;
Only (4) is strongly typed. 
In fact, (1)  or (2) is how most implementations of (3) and (4) actually store the values internally; they simply add a layer of property definitions over the top to ensure that public access is explicitly named and/or strongly typed.
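A sketch of that arrangement (purely illustrative):

            using System.Collections.Generic;

            public partial class Product
            {
                // internal storage is still a name-value collection, as in (1) ...
                private readonly Dictionary<string, object> _columnValues =
                    new Dictionary<string, object>();

                // ... with a strongly typed, explicitly named property layered over
                // the top, as in (4)
                public int ProductID
                {
                    get { return (int)_columnValues["ProductID"]; }
                    set { _columnValues["ProductID"] = value; }
                }
            }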
The points of this whole exercise, and the litmus tests, are this:
  • if a type change is made in the underlying database and propagated to the objects in the bottom level of the DAL, then in the strongly typed case (4), any code which uses the property in a type-sensitive way will fail on compile (since the type it is expecting from the property will no longer match the property’s new type).  In the weakly typed cases, the code has to cast the ‘object’ type it is given to the type it wants, and will not fail until runtime, when the program is unable to perform that cast (a sketch contrasting the two follows this list).
  • if a column name is changed in the underlying database and propagated to the objects in the bottom level of the DAL, the explicitly named cases (2), (3) and (4) will likewise fail on compile, whereas the non-explicitly named cases will wait until runtime to throw a ‘column not found’ error.
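To make the contrast concrete, suppose ProductID changes from int to Guid in the database and the DAL is regenerated (a sketch):

            // Strongly typed (4): the regenerated property is now a Guid, so this line
            // no longer compiles and the problem surfaces immediately.
            int id = product.ProductID;

            // Weakly typed (1): this still compiles, and only fails at runtime with an
            // InvalidCastException when the returned Guid cannot be cast to int.
            int id2 = (int)product.GetColumnValue("ProductID");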
As a result of these examples, we could add another term: explicitly named, for the layer objects.  In the situation where the column values are exposed as individual properties – (3) and (4) – they are definitely explicitly named.  Because (2) causes a compile failure (due to the changed enum value), we should really view it as a form of explicit naming as well.  This may become useful later on.
Note that we cannot have strong typing without first having explicit naming – otherwise we don’t know ahead of time which column we are going to return the type for.  However we can have explicit naming with or without strong typing.
The ability to have code ‘break’ on compile is desirable simply because the development team is notified of the bug very early in the process, and cannot actually compile until the appropriate changes (hopefully not a hack) are propagated through the system.  If the error occurs only at runtime, it will only be picked up in testing.  If the testing is not thorough, then the bug might get through to the end user before it is picked up.
The typing and naming of properties might seem like academic nit-picking at this stage of the conversation, but rest assured that later on it will become of utmost importance.
Central to the issue is that it is hard to pass explicitly named and typed properties around to standardised functions and procedures that need to deal with them, since all references to them need to be hard-coded.  The only way to access the properties as a generic list is to use reflection to obtain the property metadata for the object and iterate through the list of properties and values.  This is exactly what the .NET databound components do to bind to objects.
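Something along these lines (a minimal sketch of the reflection loop):

            using System;
            using System.Reflection;

            public static class ObjectInspector
            {
                // Iterate over an object's public properties and values via reflection -
                // roughly what the databound components do when binding to objects.
                public static void DumpProperties(object dataObject)
                {
                    foreach (PropertyInfo prop in dataObject.GetType().GetProperties())
                    {
                        Console.WriteLine("{0} = {1}", prop.Name, prop.GetValue(dataObject, null));
                    }
                }
            }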
And gee, that looks a lot like the weakly typed and named method used to access the columns.  Is it worth the extra trouble?  Are we splinting our arms straight here and then designing a really long-handled spoon to eat with?
While explicit naming and strong typing force people to access the objects in very safe ways, are there much simpler and possibly more powerful ways to allow this access while maintaining the important benefits of strong typing?  Can we combine the two methods somehow?
This is essentially what this discussion is about.
Two Cases
Before proceeding, I want to discuss the nature of the two poles of layering philosophies and find out why they both sit well within their own domains.
The ‘Large Project’ Camp
The large project is often complex in terms of the development team, the platform technologies, and often but not always the application.  In a layered architecture, there may be developers or a whole team dedicated to just the presentation layer, or just certain features in the BLL, and often a dedicated DBA administers all changes to the database system(s) and may not be involved in coding at all.  Given this scenario, it suits the development style to have the project broken down into many very small pieces.  This is the scenario where strong typing has a good mandate. 
Planning and analysis is done to some degree.  Adding a single column to the application may necessitate committee meetings and approvals, paperwork to track it, followed by small but almost identical code changes by various teams as the new column is explicitly ushered in and out of a whole host of interfaces between objects and finally used by core presentation and business logic.  The cost of doing this may run into days of time.  However, each team can be assured that they have complete control over what comes in and goes out of their part of the application, and something changed in some other layer cannot easily break their design (if this happens, something will fail to compile and be relatively easily traced back to its source outside).
This methodology locks the development team into a slow, relatively inflexible, and expensive process, but it may well be the best outcome for large corporate scenarios, since the very tight control allows tasks to be divided into manageable fragments.
The ‘Small Project’ Camp
The small project is often staffed by a single developer, up to perhaps five developers.  There can be a widely varying amount of formal planning and analysis, but the developers usually have a good idea of what they are setting out to build.  They may use an agile methodology and consult with their client often (this is not to suggest that agile methodologies are not applicable in large organisations).
These developers often realise that their applications will benefit from a more structured approach, and in particular the unit testing demanded by the agile methodologies.  However, they often perceive that this locks them into the highly structured OOP approach.  When they change or add a column to their database, are they going to want to trawl through several application layers, retyping that same column/property name and type six or seven times?  No way!
In this small team, an individual usually knows enough about the whole application to be able to patch something new in single-handedly if required.  And if everyone’s for it, it is nice if the application can be structured so that a new database column just percolates up through the DAL and BLL by itself as part of the existing table data, rather than needing to be explicitly added to anything (this is the point where the OO fanatics choke on their coffee).  Welcome to the magic of non-explicit naming and weak typing!  As long as a new database column is nullable or has a default value (and hence can be left out of an INSERT statement without causing an error), it can trail along with the existing table data without causing any problems.
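As a sketch of how that free ride works with a weakly typed row object (all names invented):

            using System.Collections.Generic;

            // The weakly typed row simply stores whatever columns the query returned.
            public class DataRowObject
            {
                public Dictionary<string, object> Columns =
                    new Dictionary<string, object>();
            }

and, once a new nullable column (say Barcode) has been added to the table, a ‘SELECT *’ based loader picks it up automatically, so a screen can start using it with no DAL, BLL or interface changes:

            object barcode;
            row.Columns.TryGetValue("Barcode", out barcode);   // row is a loaded DataRowObject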