Response              Count   Percent
Title                    16      6.3%
Title Keyword(s)        158     62.2%
Title Words              72     28.3%
Other                     7      2.8%
No response               1      0.4%
Note: The last response shows the astonishing persistence of users in entering searches into any available input box.
2. Another search option looks for titles that start with the words you type, in the order you type them. Example: searching for whom* would find For Whom the Bell Tolls. Which is the best name for this kind of search?
Response                Count   Percent
Title (exact)              23      9.1%
Title begins with ...     180     70.9%
Title phrase               40     15.7%
Other                       9      3.5%
No response                 2      0.8%
The next question was changed in mid-survey to test an alternate name for this search option. Results for both are given below. With both name variants, misunderstanding of what this search would cover was more prevalent among undergraduates; these results are broken out in separate tables. Highlighted responses are those indicating the best understanding (or least misunderstanding).
3. We’re considering adding a “Journal Title” search option. Which of the following do you think this search would cover? (Check all that apply)

All respondents (n=152) | Undergraduates only (n=22)
3. We’re considering adding a “Title of Journal” search option. Which of the following do you think this search would cover? (Check all that apply)

All respondents (n=102) | Undergraduates only (n=25)
4. GLADIS does not use the web for its displays. GLADIS appears as words on a blank screen, as shown here. When you type your search and press ENTER, your search is executed. Which is the best name for this kind of interface?
Response          Count   Percent
command-line         86     33.9%
telnet               68     26.8%
terminal-style       20      7.9%
text-based           25      9.8%
text-only            23      9.1%
Other                16      6.3%
No response          16      6.3%
5. Would the name "text-based" or "text-only" mean that GLADIS contains the full text of materials?
Response      Count   Percent
Yes              38     15.0%
No              202     79.5%
No response      14      5.5%
85.1% of the undergraduates in this sample answered correctly that these names would not mean GLADIS contains full text. This compares favorably with 79.5% of all respondents.
6. Any other comments about Pathfinder or GLADIS?
Responses to this question are contained in a separate document, not posted on the web.
6. You are:
Response                        Count   Percent
UC Berkeley undergraduate          47     18.5%
UC Berkeley graduate student      119     46.9%
UC Berkeley faculty                22      8.7%
UC Berkeley staff                  25      9.8%
Other                              37     14.6%
No response                         4      1.6%

Most of the “Other” responses were alumni (15) and members of the general public (13).
7. What is your major or field of interest?
This table is a rough breakdown of the 225 classifiable responses in a free-form comment box. The preponderance of respondents in the humanities and social sciences probably reflects patterns of library and catalog use, rather than any bias in the survey.
Response                                Count   Percent
humanities                                 72     32.0%
social sciences, education, business       96     42.7%
biological sciences                        18      8.0%
physical sciences, engineering             28     12.4%
international/area studies                 11      4.9%
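As a quick arithmetic check on the table above, each Percent value is simply the count divided by the 225 classifiable responses, rounded to one decimal place. A minimal Python sketch (field names copied from the table):

```python
# Counts of classifiable major/field responses, copied from the table above.
counts = {
    "humanities": 72,
    "social sciences, education, business": 96,
    "biological sciences": 18,
    "physical sciences, engineering": 28,
    "international/area studies": 11,
}

total = sum(counts.values())  # 225 classifiable responses

# Percent column: count / total, rounded to one decimal place.
percents = {field: round(100 * n / total, 1) for field, n in counts.items()}

for field, pct in percents.items():
    print(f"{field}: {pct}%")
```

The rounded percentages sum to 100.0 here, though in general rounding can make such columns total slightly more or less than 100%.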
8. Approximately how often do you come to the library?
Response                      Count   Percent
Less than once a month           17      6.7%
1-5 times a month                74     29.1%
6-10 times a month               47     18.5%
More than 10 times a month      106     41.7%
No response                      10      3.9%
9. Approximately how often do you use Pathfinder?
Response                      Count   Percent
Less than once a month            7      2.8%
1-5 times a month                34     13.4%
6-10 times a month               47     18.5%
More than 10 times a month      159     62.6%
This is my first time             2      0.8%
No response                       5      2.0%
Evaluation of the online survey technique
This survey generated a substantial number of responses from the user population we wanted to sample, with relatively little effort. The WebSurveyor online survey tool, licensed by CDL, turned out to be an efficient way to compose the survey, publish it, collect responses, and analyze the results. Being able to do all of this through WebSurveyor’s web interface was a great convenience.
Lessons learned: A seemingly simple survey can generate a large amount of information, and advance effort in thinking through the questions paid off here. However, changing question 3 in mid-survey (to test an alternative search name) created a separate set of results; the two sets had to be combined manually, a time-consuming process. This was worthwhile in this case but should generally be avoided. Providing a free-form comment box for respondents to indicate their subject majors/interests yielded over 225 responses that had to be classified and counted manually. It would have been more cost-effective to present users with a multiple-choice list of broad categories and let them classify themselves.
Submitted to WAG 11/22/2004
Revised 1/5/2005