I wanted to address Kurt Maly's recent comment on the problem of assessing the impacts of technology on learning when the technology itself is not stable. This problem seems closely related to the "moving target" issue which Janet Schofield raised in her post on "plans for assessment discussion." Given that the state of technology is always in flux (even in "stable" technological systems), and that the way the technology affects the educational process is therefore also in flux, it does become difficult to assess the impact of technology use on learning -- at least in a summative sense.

What is possible, however, is to assess how the technology impacts the *process* of education (from both the teacher's side and the student's side), in terms of what happens (and doesn't happen) when the technology is up and running, and in terms of what does and doesn't happen when it crashes. This assessment can focus on the factual details of, and/or the attitudinal variables related to, the learning process.

In our work on the Common Knowledge: Pittsburgh project, we have tried to get at these issues through surveys, interviews, and field observations of teachers and students, both in schools which "successfully" got up and running with their technology and in schools where there have been a lot of technical problems. Comparing what happens in those two kinds of cases can shed light on some important issues regarding what happens when new technology is brought into education, even if we cannot actually measure changes in learning outcomes. Comments?