
Monday, January 26, 2015

STEVE CATES: COMMON CORE – TRANSFORMING EDUCATION – MIND MAPPING STUDENTS?



 

 


UCLA Psychologists Manage/Oversee North Dakota Education

On November 4, 2014, Robert Marthaller of the North Dakota Department of Public Instruction signed a Memorandum of Understanding that placed the Smarter Balanced Assessment Consortium (SBAC), the organization of which North Dakota is a member state, under the management oversight of the University of California, Los Angeles. While this is in and of itself highly problematic, it led this author to consider just who the people now significantly in charge of North Dakota’s education policy are. UCLA’s National Center for Research on Evaluation, Standards, & Student Testing (CRESST) is funded by the U.S. Department of Education (per the last sentence of the first paragraph of its biography) and lists 42 individuals, with their respective biographies, on its staff website. An initial review showed that 25 of these people have strong academic roots in, or professional ties to, the field of psychology. The three co-directors, as well as the direct SBAC management contact, the project manager, are all psychologists.

Smarter Balanced Controlled by Psychologists

The majority of SBAC managers and advisors are psychology-related professionals, to the point that psychology appears to be a significant aspect of the organization. That strikes this author as odd, and further investigation leads to the recognition that something much different from the usual annual test (as required by North Dakota Century Code 15.1-21-08) is actually being done. The Century Code requires annual tests of reading, mathematics, and science. North Dakota taxpayers will now be paying for much more than a traditional measure of academic progress. The state will be conducting what are termed Formative, Interim, and Summative assessments, and the results will be uploaded to the state’s “State Longitudinal Data System” for education analytics. According to SBAC and the U.S. Department of Education, a significant portion of this data will have a psychological basis and function.

CRESST & SBAC PSYCHOLOGISTS

 

Transforming Education

In November 2010, the U.S. Department of Education, under Secretary Arne Duncan, released a publication that explains in detail how the states’ adoption of the Common Core State Standards, implemented through Race to the Top, would be used to “transform education.” In explicit terms, the report explains how, unlike the tests of academic mastery used in the past, the new generation of assessments will rely heavily on technology to obtain, organize, archive, and utilize data in order to evaluate individual students’ cognitive function and intellectual processing capabilities.

 

Direct quotes from the November 2010 publication, Transforming American Education: Learning Powered by Technology (National Education Technology Plan 2010), U.S. Department of Education, Office of Educational Technology:

http://www.ed.gov/sites/default/files/netp2010.pdf

 

Page xi:

The nation’s governors and state education chiefs have begun to develop standards and assessments that measure 21st-century competencies and expertise in all content areas. Technology-based assessments that combine cognitive research and theory about how students think with multimedia, interactivity, and connectivity make it possible to directly assess these types of skills.

Page xvi:

Advances in learning sciences, including cognitive science, neuroscience, education, and social sciences, give us greater understanding of three connected types of human learning—factual knowledge, procedural knowledge, and motivational engagement. Technology has increased our ability to both study and enhance all three types. Today’s learning environments should reflect what we have learned about how people learn and take advantage of technology to optimize learning.

Page 25:

Goal: Our education system at all levels will leverage the power of technology to measure what matters and use assessment data for continuous improvement.

Most of the assessment done in schools today is after the fact and designed to indicate only whether students have learned. Little is done to assess students’ thinking during learning so we can help them learn better.

Page 26:

“I’m calling on our nation’s governors and state education chiefs to develop standards and assessments that don’t simply measure whether students can fill in a bubble on a test, but whether they possess 21st century skills like problem-solving and critical thinking and entrepreneurship and creativity.” —President Barack Obama, Address to the Hispanic Chamber of Commerce, March 10, 2009

President Obama issued this challenge to change our thinking about what we should be assessing. Measuring these complex skills requires designing and developing assessments that address the full range of expertise and competencies implied by the standards. Cognitive research and theory provide rich models and representations of how students understand and think about key concepts in the curriculum and how the knowledge structures we want students to have by the time they reach college develop over time. An illustration of the power of combining research and theory with technology is provided by the work of Jim Minstrell, a former high school physics teacher who developed an approach to teaching and assessment that carefully considers learners’ thinking.

Page 27:

Technology Supports Assessing Complex Competencies

As Minstrell’s and others’ work shows, through multimedia, interactivity, and connectivity it is possible to assess competencies that we believe are important and that are aspects of thinking highlighted in cognitive research. It also is possible to directly assess problem solving skills, make visible sequences of actions taken by learners in simulated environments, model complex reasoning tasks, and do it all within the contexts of relevant societal issues and problems that people care about in everyday life (Vendlinski and Stevens 2002).

Page 28:

Growing recognition of the need to assess complex competencies also is demonstrated by the Department’s Race to the Top Assessment Competition. The 2010 competition challenged teams of states to develop student assessment systems that assess the full range of standards, including students’ ability to analyze and solve complex problems, synthesize information, and apply knowledge to new situations.

 

Page 29:

Assessing During Online Learning

When students are learning online, there are multiple opportunities to exploit the power of technology for formative assessment. The same technology that supports learning activities gathers data in the course of learning that can be used for assessment (Lovett, Meyer, and Thille 2008). An online system can collect much more and much more detailed information about how students are learning than manual methods. As students work, the system can capture their inputs and collect evidence of their problem-solving sequences, knowledge, and strategy use, as reflected by the information each student selects or inputs, the number of attempts the student makes, the number of hints and type of feedback given, and the time allocation across parts of the problem.
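To make concrete the kind of event-level capture the plan describes, here is a minimal sketch in Python of a per-item telemetry record. It is this author’s own illustration, not code from SBAC or the Department of Education; the field names (attempts, hints, time on item) simply mirror the categories listed in the excerpt above.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ItemSession:
    """Telemetry for one student working on one assessment item."""
    student_id: str
    item_id: str
    started_at: float = field(default_factory=time.time)
    attempts: list = field(default_factory=list)  # (timestamp, response) pairs
    hints: list = field(default_factory=list)     # (timestamp, hint_text) pairs

    def record_attempt(self, response: str) -> None:
        self.attempts.append((time.time(), response))

    def record_hint(self, hint_text: str) -> None:
        self.hints.append((time.time(), hint_text))

    def summary(self) -> dict:
        """The per-item record an online system could upload for later analysis."""
        return {
            "student_id": self.student_id,
            "item_id": self.item_id,
            "num_attempts": len(self.attempts),
            "num_hints": len(self.hints),
            "time_on_item_sec": round(time.time() - self.started_at, 1),
        }

# Hypothetical usage: one student, one item, two attempts and one hint.
session = ItemSession("student-0042", "math-g5-item-17")
session.record_attempt("3/4")
session.record_hint("Try converting both fractions to a common denominator.")
session.record_attempt("6/8")
print(session.summary())
```

Even this toy version shows how every keystroke-level decision (how many tries, how much help, how long) becomes a stored data point, which is precisely what distinguishes this approach from a traditional end-of-year test.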

 

 

 

Smarter Balanced Proposal for American Education Transformation

 

Just before “Transforming American Education” was published, the Smarter Balanced Assessment Consortium submitted a 187-page application (dated June 23, 2010) to the U.S. Department of Education’s Race to the Top assessment competition. SBAC was eventually awarded over 180 million dollars to do the following.

 

http://www.edweek.org/media/sbac_final_narrative_20100620_4pm.pdf

 

 

SBAC is committed to developing an assessment system that meets all Critical Elements required by USED Peer Review, relying heavily on the Standards for Educational and Psychological Testing (AERA, APA, NCME, 1999) as its core resource for quality design. Page 33.

 

For example, Conley (personal communication, 2010) recommends that the Consortium focus on measuring a key set of cognitive strategies (problem formulation, research, interpretation, communication, and precision and accuracy) and self-management skills (time management, goal-setting, self-awareness, persistence, and study skills) that have been shown to be critical for success in college courses and technical certificate programs. Page 88.

 

Psychometric research and evaluation activities will be carried out for the high-stakes summative assessments (achievement and growth measures) and the optional interim/benchmark assessments. The Consortium is committed to the use of industry-standard psychometric techniques during all phases of system development, including planning, design and development, small-scale pilot testing, ongoing field testing, score scale development, operational administration, setting of performance/achievement standards, and post administration data review. Page 89.
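As a purely illustrative aside (this author’s sketch, not material from the application): the “industry-standard psychometric techniques” invoked here typically rest on item response theory. The simplest such model, the one-parameter Rasch model, expresses the probability of a correct answer as a function of the gap between a student’s estimated ability and an item’s difficulty:

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """One-parameter (Rasch) IRT model: probability that a student of
    ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Illustrative values only: a student of average ability (theta = 0.0)
# facing an easy, a moderate, and a hard item.
for difficulty in (-1.0, 0.0, 1.0):
    print(f"difficulty {difficulty:+.1f}: "
          f"P(correct) = {rasch_probability(0.0, difficulty):.2f}")
```

Score scales and achievement standards of the kind the application mentions are then built by estimating ability and difficulty parameters like these from large pools of student responses.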

 

(3) Item types and scoring reliability. The summative and I/B assessments will make use of technology-enhanced item types and performance events. During small-scale pilot testing, field testing, and operational administrations, the Research and Evaluation Working Group will monitor the reliability of automated and/or AI scoring of selected-response items, constructed response items, technology-enhanced items, and performance events; reliability of the scoring systems to ensure reliable data collection and to safeguard against technological problems in data collection; and development of scoring guides that detail the educational intent of each item type and how each works to collect information about students’ levels of cognitive complexity/critical thinking skills. Page 94.
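In practice, “monitoring the reliability of automated and/or AI scoring” usually means checking how closely machine-assigned scores track human-assigned scores on the same responses. The sketch below (again this author’s illustration, with invented rubric scores, not SBAC code) computes Cohen’s kappa, a standard chance-corrected agreement statistic:

```python
from collections import Counter

def cohens_kappa(human: list, machine: list) -> float:
    """Chance-corrected agreement between human and machine scores."""
    n = len(human)
    observed = sum(h == m for h, m in zip(human, machine)) / n
    h_freq, m_freq = Counter(human), Counter(machine)
    expected = sum((h_freq[c] / n) * (m_freq[c] / n)
                   for c in set(h_freq) | set(m_freq))
    return (observed - expected) / (1 - expected)

# Hypothetical 0-2 rubric scores for ten constructed-response answers.
human_scores   = [2, 1, 0, 2, 1, 1, 2, 0, 1, 2]
machine_scores = [2, 1, 0, 1, 1, 1, 2, 0, 2, 2]
print(f"kappa = {cohens_kappa(human_scores, machine_scores):.2f}")  # 0.69 here
```

A kappa near 1.0 would indicate the automated scorer agrees with human raters well beyond chance; values much lower would flag the scoring engine for review.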

 

(1) Validity and fairness. The I/B assessments will be developed with a foundation in cognitive theory and research about how students learn over time. For this reason, the Consortium’s short- and long-term research and evaluation priorities include examining the degree to which

 

·  the I/B assessments are grounded in cognitive development theory about how learning progresses across grades and competence develops over time;

·  in that respect, the generalizability of learning progressions across various student populations (e.g., high- vs. low-achieving students, students with disabilities [SWDs], and English learners [ELs]), within and across states, will be of particular interest;

·  the I/B assessments—in keeping with the Theory of Action—elicit specifically targeted knowledge, skills, and/or cognitive processes related to college and career readiness, using a reasoning-from-evidence approach during item/event development that is grounded in understanding of how students acquire competence or develop expertise in a content domain;

 

 

Selected References from the Application

American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

 

Mayer, R. E. (1992). Thinking, problem solving, cognition (2nd ed.). New York, NY: W. H. Freeman.

 

McCall, M., & Hauser, C. (2007). Item response theory and longitudinal modeling: The real world is less complicated than we fear. In R. W. Lissitz (Ed.), Assessing and modeling cognitive development in schools: Intellectual growth and standard setting. Maple Grove, MN: JAM Press.

 

ETS. (2010). Using natural language processing (NLP) and psychometric methods to develop innovative scoring technologies. Retrieved June 11, 2010, from http://www.ets.org/Media/Home/pdf/AutomatedScoring.pdf

 

Mitzel, H., Lewis, D., Patz, R., & Green, D. R. (2001). The bookmark procedure: Psychological perspectives. In G. Cizek (Ed.), Setting performance standards: Concepts, methods, and perspectives (pp. 249–281). Mahwah, NJ: Erlbaum.

 

Reckase, M. D. (2006). A conceptual framework for a psychometric theory for standard setting with examples of its use for evaluating the functioning of two standard setting methods. Educational Measurement: Issues and Practice, 25(2), 4–18.

 

