First, some links on considerations for survey design and quick accessibility testing.
Given the constraints of typical museum project budgets, it's helpful to know you can get useful results with as few as five testers. Here's everybody's favourite, Jakob Nielsen, on why you can do usability testing with only five users, card sorting exercises for information architecture with 15 users and quantitative studies with 20 users. Of course, you have to allow for testing for each of your main audiences and ideally for iterative testing too, but let's face it – almost any testing is better than none. After all, you can't do user-centred design if you don't know what your users want.
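If you're curious about the arithmetic behind the five-user claim, Nielsen's rule of thumb comes from the formula problems found = N(1 − (1 − L)^n), where L is the share of usability problems a single tester uncovers (Nielsen's published average is about 31%). A minimal sketch of that curve, assuming Nielsen's 31% figure rather than anything specific to your own project:

```python
# Rough sketch of Nielsen's curve: proportion of usability problems found
# with n testers, assuming each tester uncovers about 31% of problems (L).
# The 0.31 figure is Nielsen's published average, not a universal constant.

def proportion_found(n_testers, l=0.31):
    """Expected share of problems found by n independent testers."""
    return 1 - (1 - l) ** n_testers

for n in [1, 3, 5, 10, 15]:
    print(f"{n:2d} testers -> ~{proportion_found(n):.0%} of problems found")

# Five testers already surface roughly 85% of the problems, which is why
# several small, iterative rounds tend to beat one big expensive study.
```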
There were a few good articles about evaluation and user-centred design in Digital Technology in Japanese Museums, a special edition of the Journal of Museum Education. I particularly liked the approach in "What Impressions Do People Have Regarding Mobile Guidance Services in Museums? Designing a Questionnaire that Uses Opinions from the General Public" by Hiromi Sekiguchi and Hirokazu Yoshimura.
To quote from their abstract: "There are usually serious gaps between what developers want to know and what users really think about the system. The present research aims to develop a questionnaire that takes into consideration the users' point of view, including opinions of people who do not want to use the system". [my emphasis]
They asked people to write down "as many ideas as they could – doubts, worries, feelings, and expectations" about the devices they were testing. They then grouped the responses and used them as the basis for later surveys. Hopefully this process removes developer- and content producer-centric biases from the questions asked in user testing.
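If you wanted to rough out that grouping step digitally rather than with sticky notes, a very minimal sketch might look like the following. The themes, keywords and responses are invented for illustration and aren't taken from Sekiguchi and Yoshimura's study; in practice the grouping is a qualitative judgement call, not a keyword match.

```python
# Toy sketch: grouping free-text responses into themes that could seed
# later survey questions. Themes and keywords are hypothetical examples.
from collections import defaultdict

themes = {
    "battery/charging": ["battery", "charge"],
    "ease of use": ["confusing", "easy", "difficult"],
    "distraction from objects": ["distract", "screen"],
    "don't want to use it": ["prefer labels", "no thanks"],
}

responses = [
    "I'd worry the battery would die halfway round",
    "Seems easy enough, but a bit confusing at first",
    "I'd rather look at the objects than at a screen",
    "I just prefer labels, no thanks",
]

grouped = defaultdict(list)
for response in responses:
    text = response.lower()
    for theme, keywords in themes.items():
        if any(keyword in text for keyword in keywords):
            grouped[theme].append(response)

for theme, items in grouped.items():
    print(f"{theme}: {len(items)} response(s)")
```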
One surprising side-effect of good user testing is that it helps get everyone involved in a project to 'buy into' accessibility and usability. We can all be blinded by our love of technology, our love of the bottom line, our closeness to the material to be published, etc., and forget that we are ultimately only doing these projects to give people access to our collections and information. User testing gives representative users a voice and helps everyone re-focus on the people who'll be using the content and what they'll actually want to do with it.
I know I'm probably preaching to the converted here, but during Brian Kelly's talk on Accessibility and Innovation at UKMW07 I realised that for years I've had an unconscious test for how well I'll work with someone based on whether they view accessibility as a hindrance or as a chance to respond creatively to a limitation. As you might have guessed, I think the 'constraints' of accessibility help create innovations. As 37signals say, "let limitations guide you to creative solutions".
One of the points raised in the discussion that followed Brian's talk was about how to ensure compliance from contractors if quantitative compliance tests and standards are deprecated in favour of qualitative measures. Thinking back over previous experiences, it became clear to me that anyone responding to a project tender should be able to demonstrate an intrinsic motivation to create accessible sites, not just an ability to deal with the big stick of compliance, because a contractor's commitment to accessibility makes such a difference to the development process and outcomes. I don't think user testing will convince a harried project manager to push a designer for a more accessible template, but I do think we have a better chance of implementing accessible and usable sites if user requirements are considered at the core of the project from the outset.