Thursday, October 24, 2013

Final Learning Log

This post covers most of the topics we learned in the assessment class. I just wanted to share some thoughts drawn from our theoretical framework and the recommendations for a teacher in our consultancy work. Note that this is not a trick; I simply wanted to share it and keep a learning log for future subjects or studies.



       The assessment and development of foreign language teaching and learning are firstly determined by the pedagogical constructs that teachers reflect on and apply in their practice (Brown, 2000). In any teaching context, most students’ affective filter is the first factor directly affected by their teachers, since teachers are the students’ first contact with a foreign language, and learning something new is permeated by pupils’ emotions and feelings (Krashen, 1981). In some cases, students show reluctance towards the teaching environment owing to teachers’ inaccurate assessment constructs and practices.
Regarding these constructs, a paramount item to ponder is the distinction between evaluation and assessment (Scanlan, 2012). The former stems from the quantitative practice of measuring student outcomes at the end of a course, that is to say, a summative feature. In this type, we can observe a formal and convergent practice of assessing students, which means they are aware of the evaluation itself, which comes with boundaries or specific demands (grading, students’ reports, promotion and institutional ranking). As a consequence, student reluctance arises from teachers’ use of evaluation as the only way of measuring knowledge and performance, seen as a punishment that reflects negative, partial and discrete results.
Therefore, Scanlan (2012) and Brown (2000) propose a broader concept of assessment in language teaching practice to ameliorate and complement the evaluation process. Assessment is then presented as the qualitative and progressive measurement of learners’ performance in a continuous process which, as Genesee and Upshur (1996) add, occurs at all times in instructional and non-instructional realms, that is to say, a formative feature. In this conception, assessment is characterized as an informal procedure to collect data on students’ achievement, or on the effectiveness of teachers’ planning and instruction, so that teachers can provide positive feedback on students’ performance. Thus, this process allows pupils to improve any particular ability, and teachers to make decisions to change objectives, purposes, plans and instruction when needed.
Now then, these decisions are the result of an ongoing observational and reflective process on the classroom environment and teaching performance. This is a process of systematizing gathered information that helps teachers improve their everyday practice, that is to say, their methodology and general performance. For this purpose, observation should be focused on a specific scope for all sessions and limited to certain features of the teaching act in each class, since “teaching is a complex and dynamic activity, and during a lesson many things occur simultaneously, so it is not possible to observe all of them” (Richards & Farrell, 2011, p. 90).
To manage this complexity, we can turn to Genesee and Upshur (1996), who elucidate a four-step process of teaching and learning that plays an important role in classroom-based evaluation and in our observation task. The first step to keep in mind is identifying purposes; the second is collecting information; the third is interpreting the information, and the fourth consists of making decisions. Furthermore, these authors propose a strategy for making decisions by comparing these steps, from input factors to learning outcomes, in order to spot mismatches and reach a solution based on reflection.
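Just to make the four steps concrete, here is a toy sketch of the cycle in Python. Only the four step names come from Genesee and Upshur; the observation notes, the interpretation heuristic and the decision rule are entirely invented for illustration:

```python
# A toy sketch of the four-step cycle of classroom-based evaluation
# (identify purposes, collect information, interpret, make decisions).
# All data and thresholds here are invented for illustration.

def classroom_evaluation_cycle(observations):
    # Step 1: identify purposes
    purpose = "check whether the lesson met its listening objectives"

    # Step 2: collect information (here, informal observation notes)
    collected = [note for note in observations if note.strip()]

    # Step 3: interpret the information (a crude heuristic)
    met_objective = sum("understood" in note for note in collected)
    interpretation = met_objective / len(collected) if collected else 0.0

    # Step 4: make decisions based on the interpretation
    if interpretation < 0.5:
        decision = "revise the lesson plan and re-teach the listening task"
    else:
        decision = "keep the current plan and move on"
    return purpose, interpretation, decision

notes = ["Ana understood the dialogue",
         "Luis was lost",
         "Sara understood most of it"]
print(classroom_evaluation_cycle(notes))
```

The point of the loop structure is that the decision feeds back into the next round of purposes, which is exactly the reflective cycle the authors describe.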
We then underpin our observation with these theories, which have been discussed in class. That is why this analysis covers the role of assessment in teaching methodology, bearing in mind input factors, instructional purposes, instructional plans and outcomes. As Genesee and Upshur (1996) advise, teachers should focus on incongruences between these items; in other words, on the match between student needs, attitudes and abilities and instructional objectives; between lesson planning and aims; between class implementation and planned lessons; or between outcomes, objectives and input factors. All of these reflections can be useful for giving feedback and for making decisions and changes at any of the previous stages, on the spot and for future classes.
Thus, the awareness of classroom observation and self-assessment, and the consolidation of students’ abilities, are arduous tasks for reflexive practitioners and transformative intellectuals. On the contrary, if there is no reflection or change at any of the stages, we can consider teachers passive technicians reinforcing inert banking education (Kumaravadivelu, 2003, Chapter 1). In this way, the conclusions of this analysis will contribute both to our own teaching practice and to that of the teacher observed, taking into account successful aspects of the class as well as drawbacks. Apart from all the aspects mentioned above, we can consider one evaluation practice found in the observed classroom: a test. In this sense, it is necessary to include the five testing criteria proposed by Brown (2000): practicality, reliability, validity, authenticity and washback effects.
According to this taxonomy, a test is practical when it does not cost much, has appropriate time constraints and a time-efficient scoring procedure, and is easy to administer. Reliability consists of considering students’ physical and psychological state, the test administrator’s ability and the rater’s scoring performance. To establish the validity of a test, we can think of the congruence between the content of the test and what was taught in class, the comparison between students’ different assessed performances, the relationship between teachers’ theoretical constructs and test design, the test’s impact on learners, and students’ perceptions of it. Authenticity, in turn, calls for natural language, contextualized items, interesting situations, and real-world tasks and sources. Finally, testing can have effects on the teaching and learning process, which are called washback effects.
Furthermore, Genesee and Upshur (1996) recommend choosing and devising tests according to students’ proficiency level. In our case, these authors advise teachers to design closed-ended tests or highly structured tasks with multiple-choice questions, intended to evaluate receptive skills (reading and listening) at beginning levels. The questions should be composed of simple, concise, valid stems and suitably balanced, stem-related distractors, so that tests are appropriate or authentic, understandable, and feasible or attainable. To ensure this, tests should be constructed, edited, tried out and revised in order to be reliable and valid. These authors also suggest that educators grade with two types of scoring in mind: holistic and analytic, which lead teachers to evaluate overall performance or to apply a rubric with specific criteria, respectively.
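Purely as an illustration of the contrast between the two scoring types, the difference can be sketched in a few lines of Python. The criteria names, weights and scores below are invented, not taken from Genesee and Upshur:

```python
# Hypothetical analytic rubric: the criteria and weights are invented
# for illustration; a real rubric would come from the course objectives.
ANALYTIC_RUBRIC = {"content": 0.4, "organization": 0.3, "language use": 0.3}

def analytic_score(per_criterion, rubric=ANALYTIC_RUBRIC):
    """Weighted sum over specific criteria, each scored 0-5."""
    return sum(rubric[c] * per_criterion[c] for c in rubric)

def holistic_score(overall_impression):
    """A single global judgment on the same 0-5 scale."""
    return overall_impression

scores = {"content": 4, "organization": 3, "language use": 5}
print(round(analytic_score(scores), 2))
print(holistic_score(4))
```

Analytic scoring makes the criteria explicit and traceable, while holistic scoring is faster but hides why a given mark was assigned, which is precisely the trade-off between detailed feedback and practicality.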
         However, evaluators and test makers must bear in mind that tests are not a basis for high-stakes decisions (McKay, 2012), since success in language learning is not predicted by any test but achieved with ‘appropriate self-knowledge, active strategic involvement in learning and strategic-based instruction’ (Genesee and Upshur, 1996, p. 44). When assessing young learners, McKay (2012) suggests considering cognitive, social, emotional and physical growth and the learning environment, and weighing the kind of language program: second or foreign. In this way, teachers can avoid the negative effects of the power relationships embedded in assessment, and stop perpetuating the position of those in power.


This is because educational policy makers generate laws incongruent with our context: they have imposed English as a foreign language in education to comply with the free trade agreements with Great Britain and the USA, adapting the Common European Framework to our language learning realm and transforming it into the National Bilingualism Plan, which favors those foreign countries (García’s talks on estándares, 2013). Based on this philosophy, they have established National Standards to fulfill emerging external requirements, and they check their implementation through standardized tests that have the power of gatekeeping and of marking differences among socio-economic strata, favoring those in power (McKay, 2008). Although the government paradoxically advises teachers to be ethical in assessment and evaluation practices, teachers are forced to prepare students for external assessment rather than for meaningful learning. This can be seen as an assertive way to make students succeed in the system, since teachers ‘have a hard task to influence other stake-holders since the only real influences on them are their own prejudices and personal experiences’ (cited in López and Bernal, 2009, p. 10).
In any case, administrative stakeholders need to consider making changes in order to provide a meaningful learning environment and achieve successful outcomes (López and Bernal, 2009). Firstly, educational policy makers must provide coherent educational laws, contextual goals, and appropriate tools and human resources for teaching and learning processes and assessment practices. As Messick (cited in López and Bernal, 2009) states, policy makers must dialogue with teachers about school and student needs before creating any law. Only in this way can we ask teachers, as mediators, to be reflective about their practice, pondering the several factors mentioned above that are involved in assessing the teaching and learning process, in order to make good low- and high-stakes decisions (McKay, 2008).


 The Colombian educational ministries must prepare teachers with the appropriate knowledge for their specific area, since many teachers are randomly placed in different realms, like the observed teacher who has to give an English course regardless of her major in Educational Administration. Moreover, they should establish attainable goals, since our system still presents difficulties with school enrollment, resources, and teacher training in methodology and assessment. In this way, they can be coherent, equitable and reliable regarding the policies made, the budget invested and the impact of their high-stakes decisions on education, in favor of all educational stakeholders, especially students, parents and teachers.

By Silvia Arias and César Cristancho

Friday, August 30, 2013

Closed-ended tests in Colombia

This entry is related to the Genesee and Upshur reading on choosing and devising tasks (1996, pp. 176-196).

Regarding closed-ended tests, we can point to a plethora of examples in our Colombian context: all of the ICFES exams, official English tests, psychological tests, questionnaires and surveys. They are supposed to quantitatively, and partially, measure or sample our knowledge or opinion in different areas, or our psychological state, and they serve several goals. The first group of exams is used for national and international statistical rankings, to make decisions on educational planning, procedures and performance so as to help students improve in their learning process, and to allow students to move up in the academic realm: elementary, secondary, undergraduate and graduate education and career performance. The second is intended to give a general measure of your English skills, and thus success on these exams will supposedly make your dreams come true: getting a well-paid job, a promotion in your field, meaningful revenues, showing off your English certificate, and so on. The purpose of psychological tests is not to find out what level of schizophrenia you have reached, but to see how good you are at handling situations, controlling your emotions and acting wisely; they serve to get into jobs, master’s degrees and other fields. Questionnaires and surveys are used to collect data for any quantitative or qualitative research, investigation or inquiry in any field, from the most complex sciences to the lousiest demographic opinion surveys for setting up businesses near areas with large audiences.
All these examination purposes are, in theory, magnificent, but in practice they may end up biased, distorted or forgotten, because our government, our policy makers (most of them ignorant of pedagogical issues) and their educational policies have two aims. The first is to act as gatekeepers: to establish a gate so that only a certain elite can access a good higher education, pushing people with low incomes away from the educational system or making access harder for them (not only because of this policy but also because of economic, political and cultural problems), and setting the most vulnerable teachers apart from the governmental teaching staff by administering extremely difficult, generic and out-of-place exams, so that the most flattering, well-connected teachers can get a place on that staff. In sum, the aim is to use these tests as a means of social and political control (López & Bernal). The second aim is to impose those nonsensical exams as an obligatory requirement for graduating, getting a job or obtaining a promotion. This is the case of Pruebas Saber 11°, Pruebas Saber Pro and the Pruebas del Concurso Docente, all of them created by ICFES, which I have had to take to access undergraduate education, to graduate and to compete for a job on the governmental teaching staff.
Let’s start with Pruebas Saber 11°. We all know that this standardized exam is meant to partially measure the knowledge we acquired during high school, to survey secondary education and to grant access to university life. We know that closed-ended tests are time-consuming to build, as they require purposeful and meaningful readings for eliciting stems or questions and effective alternative responses or distractors, but when it comes to scoring and delivering results they end up being more practical. The main competence evaluated is reading comprehension, and this type of test is suitable for that purpose. This test controls the precise response wanted by the test makers and assesses a particular aspect of each subject. In theory, this exam should be appropriate, understandable and feasible for a current 11th grader, related to instructional objectives and instructional activities. Moreover, this test may satisfy several traits of the assessment principles, such as practicality, validity and authenticity. It might also follow some of the guidelines for closed-ended tasks in both stems and distractors, since the stems were related to the content, with no double negatives, inadvertent cues or verbatim repetition, and assessed what they were supposed to. Regarding the distractors, in general they were equally difficult, attractive and plausible, grammatically and semantically alike, grammatically compatible with the stems and related to the readings. There were, however, some multiple-choice questions with several correct answers, and some answers could be derived from other items, which made the test somewhat difficult but still manageable.
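Some of the item-writing guidelines above (no double negatives, plausible distractors of similar length, no verbatim repetition of the stem) lend themselves to simple automatic checks. The following rough Python sketch is purely hypothetical, not an actual ICFES tool; the heuristics and thresholds are invented:

```python
# Rough, hypothetical checks on a multiple-choice item against a few of the
# closed-ended item guidelines discussed above. Heuristics are invented.

def check_item(stem, options):
    problems = []
    # Guideline: avoid double negatives in the stem.
    negatives = sum(stem.lower().count(w) for w in (" not ", " never ", " no "))
    if negatives >= 2:
        problems.append("stem may contain a double negative")
    # Guideline: distractors should be roughly the same length,
    # so the correct answer is not given away by an oddly long option.
    lengths = [len(o) for o in options]
    if max(lengths) > 2 * min(lengths):
        problems.append("option lengths are very uneven")
    # Guideline: options should not repeat the stem verbatim.
    if any(o.lower() in stem.lower() for o in options):
        problems.append("an option repeats part of the stem verbatim")
    return problems

item = ("Which author is not unknown for never discussing washback?",
        ["Brown", "A very long and suspiciously detailed option",
         "Scanlan", "McKay"])
print(check_item(*item))
```

Of course, crude string checks cannot judge plausibility or content validity; they only catch the mechanical slips that human item writers and reviewers then confirm.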
However, some problems arise regarding reliability and washback effects on students. The test is unreliable due to several factors. Test makers take for granted that all students know how to handle a hundred-question folded exam and answer it on a response sheet just by filling in a circle with a specific kind of pencil. In my experience, it was not that easy: people had trouble unfolding the exam (real mayhem), and some others got confused by the answer sheet because it was unfamiliar at school. As we have discussed, test takers’ emotional states really count, because they can do badly on this exam if they are sick, have family problems, did not have breakfast, or get nervous and blocked, and so on. The test administration was not good either, because the indications and time constraints were stressful: hundreds of questions in two four-hour sessions on a single day. The issue gets more complicated when it comes to results, access to undergraduate programs and school rankings. This exam caused a very negative washback effect on both students and schools. My former classmates got frustrated and angry when they received their results; most of them were discouraged, felt bad or illiterate, and did not even want to think about university studies, preferring to finish high school and start working, because they could neither reach the score required to enter state universities nor count on their parents being able to afford undergraduate programs at private universities. As for ranking schools based on results, it has always been unfair, because the best-known and wealthiest schools prepare their students from kindergarten with multiple-choice tests for every single subject, even for the simplest decisions about where to go at school at certain times. In eleventh grade, directors and teachers choose the highest-performing students to take the exam under the name of the school and make the others take it independently.
In contrast, most public schools have not built these policies into their curriculum, since their aim is not to prepare students for exams but to help them learn for life (although, in class, some still go for memorization and banking education), and all their students must take the exam compulsorily. It is evident that these public schools do not get well ranked and their students do not get good results; just a few exceptional cases get good scores and can access higher education.
Regarding the ECAES exam, or Pruebas Saber Pro, it is just a requirement for getting the undergraduate diploma whether you succeed or fail, and an excuse to make money. I do not see any other purpose for this exam, since it violates all the principles of assessment of test takers’ performance except practicality. This exam is still practical because it is a closed-ended task with multiple-choice items for all the competences, plus a single open-ended question with prompt sentences for writing an essay on a topic related to one’s field, in my case, Spanish teaching, or just pedagogy. It may also follow the guidelines for closed-ended tasks regarding stems and distractors. But it was as practical as it was generic and nonsensical for many people from different fields of study. The closed-ended task covered three domains: statistics, psychology and reading comprehension. The second and third parts were nicely presented, but it was an insult for a Spanish teacher who has read lots of literature and pedagogical books, because the readings, stems and distractors seemed intended for scholars. The statistical part, on the contrary, was very hard and out of context, because most of the test takers’ studies belong to the human sciences and we had had nothing to do with numbers for a decade. Moreover, the results caused very negative washback on my classmates, owing to their weaknesses in maths and English; a friend of mine was rated at a superficial level in reading comprehension, which caused him severe emotional distress. Jumping to conclusions, he could not conceive that he was not good enough at reading when he was actually an acute reader of literature.
Finally, the concurso docente exam simply sucked: it was impractical, unreliable, invalid and inauthentic, and it had terrible washback effects and sociopolitical consequences. Regarding practicality, although the test cost little money and was well structured as a closed-ended test, it was impractical because a multitude of people took it at once, owing to the government’s two-year postponement of the exam. The time constraint was a paramount factor in leaving questions unanswered or answering them by chance, given the difficulty of the test. This fact also affected validity, in its content and in general, because there was a huge incongruence between the test questions and human sciences teachers’ knowledge; I talked to many colleagues and they did not know what to answer, because most of them had stopped studying mathematics years ago. Moving on, reliability was absent from this test: people felt so shocked that some vomited in the restrooms, and some had to travel from other cities (because the exam was not administered in every city, obviously) without having slept or eaten well. The test administrators were not prepared to give instructions on the sections and the time allotted for each part; it was chaotic. Some questions had more than one correct response, were ambiguous or, in some cases, inauthentic, especially in the psychological test, where you had to answer current pedagogical questions with illogical answers. As for washback effects, all of my fellow teachers and I felt frustrated, deceived and disillusioned about working for the government teaching staff, because the exam did not measure our knowledge, even in the literature and linguistics sections, since it was aimed more at semioticians and philologists than at Spanish teachers.
But there is a paramount political issue within this problem, and it is the fraud committed by some test makers and test takers. It turns out that, one month before the exam was administered, some test makers or ICFES workers answered the generic exam and distributed the answers, charging a considerable amount of money to well-off or politically connected teachers, a questionable and dishonest act. It has been a most corrupt and deplorable idea of those people to compete through fraud against thousands of honest teachers in need, and to take advantage of the situation: if they have no political power to get a job, they simply buy a carte blanche to ensure their first step toward a government post. It is absurd that we, teachers and lecturers, give talks about democracy, ethics and diplomacy and yet find an abysmal incongruence among our thoughts, words and actions. And an even more miserable fact is that we are competing with our own compatriots, killing and stealing from each other, while the government is pleased to treat people as fighting animals and to make policies that channel exaggerated budgets to itself and its hermetic elite.
To wind up, these tests have a sociopolitical aim, which is to ensure a good education for the elite and to obstruct the knowledge and aspirations of the working class, leading people into misery and making them succumb to criminal power. That is why we, teachers, have to begin helping students develop their critical thinking so they can act and vote wisely. Teachers should also struggle, using our intellectual power, to influence politicians and policy makers to improve this educational system, as Messick argues (1989).



Monday, August 19, 2013

The importance of students’ assessment of English-as-a-foreign-language teaching at UIS

This is my post for this week, which is about students’ assessment of teaching performance.


     Nowadays, people claim, according to the literature, that teaching practice has gone through a transformative process driven by new emerging pedagogical trends, which have had a great impact on the student community. However, innovative pedagogical proposals in daily teaching practice remain elusive. We can cite an accurate example of the incongruence between theory and practice: foreign language teachers are trained under the premise of the action-oriented approach of the Common European Framework of Reference, in which the student is the main character of the teaching-learning process. Nevertheless, some professors keep applying a traditional methodology or, in other cases, an obsolete form of assessment.
        In view of the above, this problem can be noticed when university students overemphasize the expression plane of language and judge their proficiency on the basis of grammatical and phonetic precepts. Moreover, it can be seen when graduates from state universities begin their teaching careers and have to cope with problems about what to cover and how to assess it. The first obstacle to overcome is the incongruence between the pedagogical and didactic theories learnt at university and the scope managed by state institutions and their staff. The second obstacle stems from assessment processes, which are the compendium of reflection on the overall pedagogical performance, from instructional planning and yearly disciplinary projects to the closing ceremony, interconnecting the pedagogical model with students’ needs and difficulties. In these processes, it can be difficult for teachers to do a conscientious follow-up of assessment, since it requires extra time to reflect on initial (objectives), procedural and final (products or outcomes) factors and, beyond that, the time dedicated to this purpose is not remunerated. Therefore, some professors opt for traditional summative evaluation per term, which ends up being more practical and less exhausting for grading and giving final feedback on the report card, but which describes learning processes inaccurately (since they are reduced to a number with some general comments).
         This problem suggests that foreign language teaching, in the Colombian case English, in public universities has not been as successful as was expected from the teacher training meant to foster the implementation of new pedagogical theories in class. Therefore, it would be of paramount importance to analyze students’ perspectives on teaching performance. A lot of theory has been written about methodology, methods and advice for teachers, but we have not focused as much on students’ viewpoint on the foreign language teaching process in our context. In general, people tend to reform materials, apply new teaching-learning strategies and implement new mediating artifacts such as ICTs or other objects acting as input, but students’ voices have little room in decision making about methodology, assessment and teaching methods. At our university, we have only a student spokesperson and a final, biased questionnaire that evaluates teaching performance superficially, leaving important information unsaid.
     This is why all students should stand up for our rights and make our directors and professors listen to our voices, dialogue, argue and advance toward real, if small, changes with solid arguments, raising the problems we mention daily in our classrooms but which remain anonymous. On the other hand, we should avoid violence and lousy or ad hominem arguments that only trigger a conflictive environment. In conclusion, we should apply the popular Latin saying “Vox populi, vox Dei”; in other words, we have to speak up with serious proposals to make real changes in teaching assessment and in other issues that affect our institution.






Thursday, August 15, 2013

Ode to Assessment

This is a poem about what we learned before vacation: a sort of learning log.




Tabula rasa wasn’t I
Said the great Brazilian pedagogue
That, in his time, was exiled;
Now his thoughts are in vogue
But professor said once
In one of his marvelous classes
If no acquaintance with the topic
Just we have to grasp it; we don’t know it


So Vygotsky states that
previous knowledge with the new one
With a peer knowing more than us
Also Vygotsky and Krashen’s i+1
In the zone of proximal development
We’ll have a great knowledge construct.


This experience of mine
Tremendous and petrifying
Can tell it is worth it
To have known the following topics


The first issue that impacts
is the great difference between two facts
Evaluation and assessment are not alike
Cause the former is to quantify
And the latter is to qualify
The former reflects on a result
The latter, the entire pedagogical route


Scanlan concerning evaluation traits
The different types he portrays
Summative to grade a final task though
Formal to make students aware and stressed
Final to value the result of the whole
or product to evaluate the outcome
convergent to have a single response.


Then we have a superficial evaluation
It’s not a great idea to leave it alone
That’s why we use the assessment form
With the diverse types that come across


Formative to follow the learning process
Giving feedback to the learners involved
with informal to make classes cool
Process to assist procedures to boot
Improving a particular ability in performance
And divergent with several answers too.


Knowing the disparity between those terms
We can move on to the next step.
Now Brown will be our next expert
To talk about principles of assessment.


Starting with practicality or effectiveness
Neither expensive is a word on tests
Nor time will become endless
Nor administering one must be a mess
Nor scoring, a time consuming mayhem.


Going on with reliability
Students’ mixed emotions concerned
When taking any subject test
But the rater matters as well
Either the criteria or the fatigue
Bearing in mind his subjectivity


Test administration also counts
To liven up the current crowd.
And test nature also minds
Even more the concern is the time.


Moving on validity principle
Being the most complex criterion
Congruence in results and purposes
Effective tests are not in the oblivion.

The first aspect: content related evidence,
Being coherent with the subject evaluated
And the expected results as consequence
with direct tests, assessing what’s associated.


The second one, criterion related evidence
Comparing results of assessment with others
Validating with a concurrent performance
Or predicting future success of test takers.


The third kind, construct related evidence
Bearing in mind theoretical constructs
Getting assessment controlled by thoughts
Can affect the validity of tests though.


The fourth is consequential validity
Considering the impacts on students
Also the social consequences indeed
Of interpretations of a test, be prudent.


Face validity is the final one
Improving learning must be the aim
If test has real familiar tasks
Items, time and directions acclaimed
By students that want a real defiance.

Continuing with authenticity
Natural language will predominate
Items, we should not isolate
Topics are meaningful, not insane
Disorder must come to an end
And tasks for aliens are not the way.


Washback, the final component
The effect on teaching and learning process
The negative effects could be opponent
To the real assessment Brown proposes.


This same author also claims
To assess linguistic abilities in several ways
As an amalgam of receptive and productive
Skills that can never be separate.


Starting with listening skills
Of which process non-observable
Can be reflected on a product;
The link with speaking skills, undeniable.


Processes flashing in your brain
Storing sounds in short memories
Speech’s type, context and content
With bottom-up or top-down processes
To interpret the message point
Recording so in the long-term memory
Keeping auditory stages joined:
surface, pragmatics and semantics mastery.


Now the categories appear
Being intensive the first one;
Perception of components is a gear
Moving forward to the other ones.


In the responsive domain
Found questions and answers short
In the selective, to scan for information
Extensive for global and gist comprehension.


If micro and macro skills are mentioned
Humboldt can help us in this section
Macro, the content plan for the meaning
Micro, for the form, the plan of expression.


In this pace, difficulties may arise
As clustering, the accurate language chunk
Redundancy, repetitions spoken to recognize
And with reduced forms, many people flunk.


Performance variables, in natural speech to get by
Colloquial language become a real chaos
Rate of delivery is a mess for slow guys
Prosodic elements we have to understand
With the flow of interaction you must run.


Linked to this skill we are to state
That speaking is not from listening far away
Speaking activities with aural ones: non-split
stimulus, target and product must be linked.


To assess this skill, there’re some basic types
Imitative speaking as the PhonePass test goes
For some, audio-lingual method, fruitless though
For others, balanced repetition makes discourse accuracy thrive.


We find intensive speaking as well
With short parts of discourse, a sentence
Limited response and mechanical tasks dwell
With controlled responses in speaking tests.


Then responsive speaking is to appear
With interaction and test comprehension
Spoken prompts are not a big deal
In limited level of short conversations.


Alike this type, there’s interaction
With longer and more complex ideas;
Transactional to exchange information
Interpersonal for social relations, so near.


The last type of this productive skill
Strictly stems from oral monologues
Restricting the interaction mainstream;
Oral production, more demanding though.


The micro and macro skills of speaking
Somehow, resemble those of listening
So we can move on to the next combination
Of productive and receptive skills, that’s the question.


Reading skill is on this pace
Counting on bottom-up strategies for superficial forms
And top-down processes for semantic traits;
Schemata of social background should be a norm.


For assessing reading skills in a realistic way
We can’t see this process through other’s eyes
What we can do, to question and to infer
Formative process of comprehension as well.


That’s why reading genres we must know
If academic with style we ought to cope
If job-related more specific documents involved
If personal, including images and dances as Yury Lotman taught.


We have some types of reading too
Perceptive to capture words and graphemic symbols
With bottom-up processes, some tools
To determine tasks for this axis.


Selective, the next category
To reckon grammar and lexical features
Picture-cued tasks and multiple choice,
Stimuli for short stretches, the answer.


About interactive, we can include
More extended stretches of language
The writer and the reader negotiate
The meaning that’s in texts.


Extensive reading is a real challenge
'Coz of more prolonged stretches of discourse
Global understanding, we scavenge
From books, essays and technical reports.


Finally, when assessing writing, several genres we find
Academic with formal and educational texts
Other documents related to jobs, a request
Personal, the most important one, don’t forget.


As personal writing concerns,
Don’t be oblivious of digital writing as well
Don’t punish people’s apparent mistakes
’Cause this tendency a new variety creates.


There are also four basic writing types
Imitative, the basic level of this skill
where to spell correctly and recognize phonemes
is paramount: Noam Chomsky’s surface level.


As for intensive, other characteristics are shown
Collocations, idioms and grammar features
Focusing on the form is the issue
Context and meaning also important though.


Going on with responsive category
Logical cohesion, in paragraphs, is needed,
List of criteria and guidelines a priori
Focusing at a discourse level on context and meaning.


To wind up this section
we have extensive category
it's the most complex action
but it's not a philosophical mystery.


Successful management of all the processes
and strategies of writing for all purposes;
focusing on ideas and arguments is the aim
so multiple drafts, a long document, can generate.



To review micro and macro skills, if you want
You can check them on page 221
From Brown’s Language assessment book
If you ever would like to reboot.


For now, the explanation is over
We don’t want deadlines any longer
This was what we learned before vacations
If you want to read more, see you in the next session.

Monday, June 17, 2013

Assessing speaking

       Here is a summary of speaking assessment. I hope this Glogster will help you get the point of this chapter (Brown, 2000).

Sunday, June 2, 2013

Testing, a way of bombing learners

This entry refers to chapter 1: Testing, Assessing and Teaching (Brown, 2000)

           As far as I recall, in my childhood I used to be good at tests. I did not yearn to be tested every single time I learned a topic, but whenever a test, a quiz or an exam was administered, I was ready to take it. Yet deep inside I did not like them, because I would get nervous and end up exhausted from answering those summative memory cloze tests. At High School, the testing system was slightly different; just some new formats were introduced, such as multiple-choice, open-question and formative tests. In the first years, I felt at ease with all of the tests teachers administered and with the grades I would get, since I was an outstanding student who wanted to be awarded for my endeavors in the bimestrial flag ceremony. In the last two years of High School, I just did not want to stay in that state of paraphernalia and decided to refuse any acknowledgment, since those were nonsensical acts that did not satisfy my intellect. It does not mean that I did badly on tests, but at the time I was no longer keen on getting first place, because I had grown disappointed and frustrated by the unreliability, invalidity and inaccuracy of tests. Most of the time, the results did not show what I knew; they were either so lame that they insulted my critical thinking or so difficult that they derailed my learning process. This unfairness of tests generated a negative washback effect in my academic life.
             At university I had the wrong perception that there would be a drastic change in testing. But, surprisingly, there was a plethora of unreliable discourse on new ways of testing: professors gave information about the grading system and ornamented speeches on the learning-teaching process at the beginning, but in the tests I could see that they reproduced the same traditional patterns of evaluation and testing from Elementary and High School. Still, I cannot deny that there were some latent changes in testing, evaluating and assessing that encouraged me to project a positive attitude towards assessment, such as presentations, conferences, talks and essays, formats in which I could develop my critical thinking and express my own ideas. With this positive washback, I no longer felt like an automaton, and I became enthusiastic about my learning process because of the sense that readings, teachings and tests made in some cases.
            But still my point is that at elementary school, at High School and even at university in Colombia, educators keep administering cloze tests in language classes and in other subjects. The skepticism I show towards this kind of test is underpinned by the following statement in the Colombian National Guidelines: “Nadie en situación real de comunicación deja espacios en blanco, ni añade palabras extrañas para que el interlocutor las detecte y las elimine, ni habla en desorden para que el otro ordene las palabras en frases coherentes” (Smith, 1994) [In a real communicative situation, nobody leaves blank spaces, adds strange words for the interlocutor to detect and remove, or speaks in disorder so that the other person can arrange the words into coherent sentences]. That is why I struggle with this kind of test: it is nonsensical and non-transcendental in real life. These are not genuine tests that can measure our language skills or help us get by in our communicative daily life.
       To wind up, it is ridiculous to still find literature professors implementing cloze tests about the readings or books that students read, analyze and criticize; I consider this practice disrespectful and inaccurate for people with great mental capacities who can think beyond the borders and who are not willing to face the same meaningless memory tests they took at High School.

Sunday, May 26, 2013

Assessment or evaluation: that is the question

        This is the entry about Scanlan's concepts on assessment


     Regarding this topic, it can be pointed out that even though some dictionaries agree that assessment and evaluation can function as equivalents, Scanlan (2012) highlights another perspective on the differences between these concepts. He draws a comprehensive parallel between them, stating that assessment is the descriptive process of judgment of an entity (the student), that is to say, a formative, process-oriented, reflective, diagnostic, flexible, individual and cooperative way of evaluation. On the other hand, this author holds that evaluating is the prescriptive way, which means evaluation turns out to be summative, product-oriented, judgmental, fixed, comparative and competitive, and it concerns instruction. Furthermore, the author includes the concept of grading, considered as a 'component of evaluation, which is a formal, summative, final and product-oriented judgment of overall quality of worth of a student's performance or achievement in a particular educational activity, e.g., a course' (Scanlan, 2012, p. 4).
       Based on these concepts, it can be stated that assessment in Colombian education has been influenced somehow by these principles, since this has been noticed at High School and university. From a critical perspective, educators nowadays are urged to materialize the concept of assessment in their classes, because it is an accurate and eclectic method to ponder, consider and follow students' processes in a deep way. Not only does it focus on keeping track of a long learning process full of feedback, but it also comprises reciprocal communication among all of the actors in education, such as teachers, students, parents, educational administrators and policy makers, which can become a holistic approach that helps improve the assessment system. Nevertheless, it is impossible to disregard the prescriptive method of evaluation, inasmuch as the Colombian Educational System has always been shaped by this concept; most Colombian institutions apply this method, as educators would rather administer formal final tests to get a result of students' processes and decide whether they pass or fail the bimester, semester or year. This is a sample of the behaviorism and positivism that have marked our system, principles whose main precursors were Pavlov and Skinner.
        In Colombian or Latin American contexts, the behaviorist stimulus-response pattern stems from grading; this is the main issue where assessment is concerned. As the vast majority of educators are positivists, they just follow the traditional pattern of evaluation. That is why students are considered in terms of numbers, that is to say, students' performance and achievement are measured and qualified by the result of a final product. In some cases, it also means that the learning process, the knowledge acquired, the accuracy, the progress and the ability to learn are narrowly measured in a quantitative way, which can cause drawbacks if marks are the most paramount, isolated issue in assessment. Therefore, people just flock to schools conditioned to get good grades, pass exams and school years and, finally, get a diploma (not worthy at all, just to satisfy parents' pride) in order to be productive in this society. So students become automata prepared to follow certain rules and generate a similar, monotonous product. And teachers become mere troglodyte machines transmitting knowledge and preparing children for routine work, or anything else but a meaningful, passionate professional life. This is the banking education that Paulo Freire criticised, applied in schools to promote utilitarianism. At universities, someone might think that the portrait is far different, but the evaluation mayhem is rather similar.
       But still, grades are predominant if anybody wants to apply for a scholarship. Anyway, educators should take both grading and assessment into account in their evaluation process, because they should use an eclectic and comprehensive method that triangulates information and allows better decisions about the education process.