To Incentivize or Not To Incentivize

As an instructor, my philosophy has always been to treat college students as adults. This philosophy grew from my own experiences as a student and from my belief that additional freedom and responsibility help students develop the habits they need to succeed in their academic and professional careers. As a USG colleague states in his class introduction, shared in the Chronicle of Higher Education, “…nor will I penalize you for being late to class once in a while, or even being absent… Unlike some of your other professors, I will not withdraw you from the class for excessive absences. If you want to withdraw, you’ll have to do it yourself before the deadline. Otherwise, if you simply stop coming, you’ll wind up with an F in the course.” This truly seems like the best way to manage a course and interact with our students, who are – or should certainly aspire to be – adult learners.

As an economist, however, I must also acknowledge the importance of incentives, a core principle in any introductory economics class. Incentives affect virtually every aspect of human activity, from the habits of bus drivers in Chile to birth rates in Estonia. The popular books Freakonomics and Superfreakonomics, and the New York Times Freakonomics column, have featured entertaining analyses of the application and misapplication of incentives to topics as diverse as education, sumo wrestling, drug dealing, and the operation of day care centers.

As such, I face the philosophical dilemma of balancing my own instincts as an instructor with the importance of incentives central to my discipline. This internal debate, however, may well be rendered moot by the specific requirements and goals of the University System of Georgia’s Complete College Georgia program. In particular, the program calls for an increase in the number of undergraduate degrees awarded by USG institutions and in the number of degrees earned “on time.” Given these prescriptions, faculty at USG institutions have little choice but to take any reasonable action to promote student learning, including the use of incentives wherever and whenever possible and prudent.

It is in this light that I made a major change in one of my important classroom policies. I had previously created a total of 27 interactive problem sets for critical topics in microeconomics and macroeconomics. These problem sets featured graphs, formulas, equations, and – most importantly – feedback on the specific solutions for all questions. Based on the instincts I mentioned above, the problem sets were completely optional; completing these assessments had no direct impact on a student’s grade. Given that these problem sets were accessible to students in WebCT, and later in D2L, at no cost, we might assume that the majority of students who are adult learners would take advantage of resources that would so clearly help them better prepare for ECON tests and exams.

We would be wrong.

When these problem sets were optional, only 62% of my students completed them, and the average score on each assessment was 33%. In an effort to promote success through incentives, I changed my policy, making these assessments required with an initial minimum score of 25% on each. This relatively low requirement was a compromise of sorts: it would create an incentive while still allowing students to engage in these activities without undue anxiety. As a result of the requirement, the completion percentage increased substantially to 92%, and the average score increased to 63%. These results were quite encouraging. My ultimate goal, however, was to improve student learning, as gauged by students’ final exam scores. In that area, the experiment was not successful: the impact of the change from “optional” to “required” on final exam scores was not statistically significant.

Yet I was able to draw some conclusions about incentives. The analysis did indicate that students who scored over 50% on the problem sets had significantly higher success rates in the course, as measured by their final exam scores. While these results raise the issue of differentiating causation from correlation, the significant relationship between success on the problem sets and success in the course as a whole at least suggests a potential benefit of incentivized participation, provided that the incentives and requirements offer sufficient rigor. Much as a lack of incentives can lead to less-than-optimal results, incentives based on standards that are too undemanding may not truly challenge our students and therefore may not provide true opportunities for growth and success.
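The significance checks described above amount to standard two-sample comparisons of means. As a minimal sketch, the comparison can be run with Welch’s t-statistic; note that the cohort scores below are entirely hypothetical, since the actual class data are not reproduced in this post.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

# Hypothetical final-exam scores for illustration only -- not the real cohorts.
optional_cohort = [61, 70, 55, 68, 72, 59, 66]
required_cohort = [64, 71, 58, 69, 74, 60, 67]
t = welch_t(required_cohort, optional_cohort)
```

In practice the t-statistic would be compared against the t-distribution with the Welch-Satterthwaite degrees of freedom to obtain a p-value; with samples this small and close, the difference would not be significant, mirroring the result reported above.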

Tutoring Services: Outcomes and Experiences

In my seven years of tutoring STEM courses, I have witnessed every type of student imaginable – the smart, the go-getter, the “do enough to get by,” the last-minute, the “I just want answers,” and the downright lazy. The one thing these students have in common is that they all realize, sometimes a little too late, that they need extra academic help, and that’s when the Tutoring Services (TS) staff members put on their superhero thinking cap(e)s, swoop in with their pens and pencils, and save them (well, most of them) from drowning in a sea of F’s, D’s, and WF’s. Realistically speaking, given the high student-to-tutor ratio, one can only imagine that there are academic casualties in this line of work. The casualties are usually those students who lack the determination to seek tutoring help, those who wait until the hours before an exam to request a lesson in weeks’ worth of material, and others who simply choose not to do the work. This landscape of motivational challenges is the honest reality that my tutoring heroes and I have come – however uncomfortably – to accept.

For many of our students, even the “downright lazy” and the “I just want answers,” if they come in early enough to get help and we can identify into which of those groups they fall, we can usually help them improve their scores – sometimes by as much as a full letter grade. How early, then, should a student come in for help in order to fully benefit from the sessions? In the spring semester of 2014, the Gainesville Campus (GC) tutoring staff collected first and second test grades from a number of GC math sections and cross-referenced them with the data obtained from the login computers in the tutoring labs. The study compared two groups across all math sections: Lab students, those who showed up for tutoring help, and non-Lab students, those who did not. The results showed many differences between the two groups. Let’s look at the first and second Math1111 (College Algebra) test grades for the two groups (Figure 1).


Figure 1: Test 1 and Test 2 grades for spring 2014. Sample size (N) = 244 for Test 1 and 214 for Test 2.

According to the results, the Lab students scored higher than the non-Lab students on both tests – an average of a 6.9-point difference on the first test and a 10.8-point difference on the second. In addition, the Lab students improved by 5.7 points (76.2 to 81.9) between the first and second tests, while the non-Lab students improved by only 1.8 points (69.3 to 71.1). One reasonable explanation for the improvement in both groups is that, after the first test, students realized they needed to study more for the second. The Lab students, however, improved their grades significantly more than the non-Lab students did.
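For readers who want to check the arithmetic, the gaps and gains follow directly as differences of the group means reported in Figure 1:

```python
# Group means reported in Figure 1 (Math1111, spring 2014).
lab = {"test1": 76.2, "test2": 81.9}
non_lab = {"test1": 69.3, "test2": 71.1}

# Gap between Lab and non-Lab students on each test.
gap_test1 = round(lab["test1"] - non_lab["test1"], 1)   # 6.9 points
gap_test2 = round(lab["test2"] - non_lab["test2"], 1)   # 10.8 points

# Improvement within each group from Test 1 to Test 2.
lab_gain = round(lab["test2"] - lab["test1"], 1)             # 5.7 points
non_lab_gain = round(non_lab["test2"] - non_lab["test1"], 1) # 1.8 points
```

The widening gap (6.9 to 10.8 points) is another way of stating that the Lab group’s gain outpaced the non-Lab group’s gain.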

What could account for this major improvement in this group? One observation that I have made during my time working in the labs is that, once a student gets the courage to seek tutoring help and has a positive experience with it, s/he is more likely to return. My staff and I make a concerted effort at the beginning of each semester to explain to all new lab students the importance of staying on top of their math assignments and the vital importance of seeking help early. The sooner a student comes in and seeks tutoring help, the more likely s/he is going to improve his/her grade. Of course, we would like for students to come in as soon as the semester starts, but that is not a realistic expectation for our wide range of students. Based on the results of our small study, we should intervene as soon as the first tests are given and graded in order to obtain favorable grade improvement outcomes and decrease the D/W/F rate in Math1111. The findings call for an intervention program to be put in place to help our students, and future discussions regarding such a program need to take place between departments and administrators.

Along the spectrum of student types, the last-minute and the “I just want answers” students are the ones who contribute most to the D/W/F rates, and they are the most interesting. They often come in just hours before a scheduled test and expect undivided attention and one-on-one tutoring. They seem to want tutors to use magic USB cables that transfer information from tutor brains to theirs. Or, at the very least, they expect quick input of information for quick output on a single test, rather than understanding that true, deep learning comes from the scaffolded and practical instruction in the classroom. My tutors are patient with these students, encouraging them to make earlier and more frequent visits to the labs before the next test. Some of these students seem to change their ways, but most do not show up for help until, once again, just hours before their tests. Tutoring sessions with these students are usually ineffective and, for the student, quite frustrating. My tutors understand that explaining the purposes and strategies of tutoring and of learning is a crucial component of their interactions with all students; if students can grasp how and why they are learning the material, they may be more likely to seek assistance in their areas of struggle.

In addition to being tutors, TS staff members also serve as counselors and advisors. Over weeks of tutoring help, many students become more comfortable around specific tutors and begin to develop preferences. Building a strong rapport between students and tutors through sharing personal math experiences is fundamental to Lab-student retention. I believe that my tutoring staff is incredibly attentive to each student’s needs and learning style. The tutors really do care about how much the tutees actually get out of the tutoring sessions. As a tutor, it is an amazingly satisfying feeling when a student returns and tells you that s/he did well on a test because you had helped him/her prepare. Moments like that are what make a tutoring job unbelievably fulfilling.

UNG Tutoring Resources and Information

Using Book Clubs to Foster Student Engagement

At its core, traditional teaching and learning involves students gaining meaning from printed and oral language. Raphael and McMahon (1994) suggested that comprehension is achieved through a partnership of students, peers, and instructors. Students should not sit isolated, working individually. Instead, students should interact through oral and written language to aid comprehension. The use of a book club in class provides an “opportunity for personal response to encourage students to construct meaning with their peers and to question whether meaning is inherent in text” (Raphael & McMahon, 1994, p. 103).

Outside the classroom, book clubs abound in many different formats, styles, and purposes, and they seemingly enhance many aspects of people’s lives. For example, the Get into Reading Project has set up over 50 book clubs in settings such as psychiatric facilities, care homes, homeless shelters, and libraries. These book clubs are designed to incite self-reflection and foster positive change in the participants’ lives. The adults in these programs have demonstrated remarkable changes, from improved mental health to empowered life transformations.

Furthermore, the use of book clubs has been shown to boost literacy among children, adolescents, and adults (e.g., Kooy, 2003; Raphael, Florio-Ruane, & George, 2001; Ward, 2010). While increased literacy has been the primary variable used to evaluate book club outcomes, a number of other educational processes and outcomes across the lifespan have been examined. For example, Raphael & McMahon (1994) found that, at the conclusion of their book club process, elementary school students with varying reading comprehension levels and cultural backgrounds (a) developed a greater ability to synthesize information, (b) had higher standardized test scores than those in more traditional reading programs, (c) had better recall of the content of the books read, and (d) demonstrated a more sophisticated writing style over time. Additionally, homeless men have been found to be more open to counseling and talking about health-related topics while participating in a book club (Home, 2008). Others have reported a link between participating in classroom book clubs and higher-order ethical decision-making (e.g., Cohen, 2006).

I have been using book clubs in my face-to-face courses for many years, and I started utilizing them for several reasons. One, I wanted students to feel more ownership of their educational process; for that reason, I do not lead the book club discussions – they are student-led. Two, I wanted students to wrestle with challenging reading and discuss how to apply the material to their lives. I typically choose books that are either evidence-based applications or that are theory-based and written by the theorist. Finally, I wanted students to be more engaged with each other. I have heard from students over the years just how meaningful these book clubs have been. I have tweaked things here and there based on their feedback, but by and large students have expressed genuine affinity for the process (if not always for the books I’ve chosen).

I recently decided to collect some data from students to examine the process and outcomes more closely. I won’t share all the methodology here, but some of the qualitative data might sway you toward considering this as a possibility in your own courses. First, a description of the process seems in order. I give some class time for the book club discussions, and for each assigned week students read a chapter from the book. Students remained in the same group for the entirety of the book club process. Each week, one student from each group led the discussion by bringing a few prepared questions for dialogue, and all students in the group were instructed to formulate some ideas for each book club dialogue. Group leaders submitted their prepared questions each week to the instructor via the course learning management system.

The following are some verbatim quotes taken from the data collected from the students:

“I found the readings valuable and was enlightened by the new perspectives.”

“I took a lot of notes and applied it to my own life. The book really enhanced my learning.”

“From a multicultural perspective, especially lower SES, we talked about how we could apply the book club’s knowledge to those populations.”

“It provided an effective way for us to process and synthesize the information. We were able to communicate on a different level.”

“We were more invested emotionally. That was great.”

“We were able to discuss some of the same concepts each week, but with a different slant based on the current reading. That helped me learn so much more than if I had just read the chapter and then discussed it in the usual classroom way.”

“It was a positive experience because it helped me a great deal to discuss the material and make learning more of an active process.”

“The book club process was extremely beneficial, as different perspectives and insights really added to the learning of the material read.”

“Hearing other students’ viewpoints helped broaden my critical thinking skills.”

Of course, not all comments were so positive, but negative responses were few and far between. One thing that clearly emerged from the data was that the students indeed felt more ownership of their education. They also felt more obligated to read the material and be present in class. Wow, what a win!

References

Cohen, R. (2006). Building a bridge to ethical learning: Using a book club model to foster ethical awareness. Journal of Legal Studies Education, 23, 87-103.

Kooy, M. (2003). Riding the coattails of Harry Potter: Readings, relational learning, and revelations in book clubs. Journal of Adolescent & Adult Literacy, 47(2), 136-145.

Raphael, T. E., & McMahon, S. I. (1994). Book club: An alternative framework for reading instruction. The Reading Teacher, 48(2), 102-182.

Raphael, T. E., Florio-Ruane, S., & George, M. (2001). Book club plus: A conceptual framework to organize literacy instruction. Language Arts, 79(2), 159.

Ward, H. (2010). Teachers’ book club helps boost boys’ reading ability. Times Educational Supplement, (4891), 13.

Midpoint Course Evaluations

by Katherine Kipp, Interim CTLL Faculty Fellow – Oconee

So we have officially passed the midpoint of the semester. Hopefully, we have finally ironed out all the wrinkles in our courses and have settled into a comfortable pattern that works for us and for our students. This is the time in the semester when I like to reflect on how well things are actually working – for me, but more importantly, for my students. I think I know how they are doing and which teaching strategies are working for them, but I’ve learned over the years that I can’t base my assessment solely on my own perceptions.

In the spirit of self-reflection, I like to conduct midpoint course evaluations in each of my courses and sections. These are similar to the end-of-course evaluations students complete in Banner, but they can be more useful because they are immediately beneficial to the students. I’ve always found these midpoint evaluations to be helpful or, at the very least, innocuous.

My procedure is to announce to the class my intention of conducting the midpoint evaluation. I ask the students to pull out a piece of notebook paper and write down two questions and their responses to each: (1) What do I like about the course and want to keep? and (2) What do I dislike about the course and want to see changed? I ask them to think about their learning in the course: what have I done that has helped or not helped them learn, and what are they doing that is or is not helping them learn. I ask them to be as thorough as possible because I will be acting upon their responses. I stress that it is very important to list the things they like, because if I hear only from those who dislike something, a feature others value might be removed from the course. I spend a few minutes explaining that I am doing this exercise so that they can have real input into the course, and that their evaluations will matter for the rest of the semester. With the instructions clear, I ask them to write out as much as they can, and then I collect their responses anonymously. The whole exercise takes only about 10 minutes of class time.

Once I have their responses, it is time for data analysis and reflection. I make a simple table listing every characteristic of the course that was mentioned and tally the likes and dislikes for each. Usually I find a few key items showing up on most students’ evaluations, along with a list of items mentioned by only one or two students. I reflect on the opinion of the class as a whole and on whether I agree with their assessment. Next, I consider what changes I could make to improve the course based on their suggestions. One of the beauties of this evaluation method is that students often clue me in on hurdles, obstacles, and techniques that I never would have thought of on my own.
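Once the free-form responses have been transcribed, the tally step lends itself to a tiny script. A sketch with made-up responses (the item names here are hypothetical, not from any actual evaluation):

```python
from collections import Counter

# Hypothetical transcribed responses: one (item, verdict) pair per mention.
responses = [
    ("group work", "like"), ("group work", "like"), ("group work", "dislike"),
    ("weekly quizzes", "dislike"), ("weekly quizzes", "dislike"),
    ("lecture slides", "like"),
]

tally = Counter(responses)
items = {item for item, _ in responses}

# Summarize each course characteristic as a (likes, dislikes) pair;
# Counter returns 0 for combinations nobody mentioned.
summary = {item: (tally[(item, "like")], tally[(item, "dislike")]) for item in items}
```

The resulting table makes it easy to separate the few items most of the class mentions from the one-off comments.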

Finally, I report back to the class. I read off suggestions and announce the number of likes and dislikes. Then I talk about how I will change to accommodate their suggestions OR the fact that I won’t change and, critically, why I won’t change. The students often enjoy this class interchange, but most importantly, they see that they are partners in the learning process, that they can influence their learning environment, and that I value their input and am eager to help them succeed.

There are many ways to structure a mid-point evaluation. Some of the parameters to consider include:

–      Anonymous vs Signed: some instructors like keeping the evaluations anonymous with the thinking that students will be more forthcoming whereas others think the students are more responsible when they sign their evaluations

–      Free-form vs Structured Format: the example above was a free-form, “write about how the course is going for you” format. Another option is to use the formal structure of the end-of-course forms given in Banner or by some other format. If you have specific techniques or issues you need feedback about, it is good to add those to whichever form you use. It is always good to have at least one open-ended question regardless of your preferred format, so that your students will have a chance to give opinions that might have been missed by your form.

–      In-class or online delivery: Contrary to what we might believe about differences in delivery format, research suggests that there are no differences in students’ responses between these delivery formats (Crews & Curtis, 2011). Although the method of gathering the evaluations doesn’t matter, it is important to have the debriefing discussion about the outcomes in person, with the class.

There are many benefits to using a midpoint evaluation. Students overwhelmingly want a voice in course structure and content, but very few feel that their evaluations are acted upon or that their feedback makes a difference (Freeman & Dobbins, 2013). This evaluation convinces them that what they think IS heard and DOES make a difference. Students gain confidence in themselves and trust in you as their instructor and partner in learning. It is a great way to get low-cost, formative feedback that you may not get from the end-of-semester summative evaluations. And research suggests it improves teaching for the rest of the semester (Cohen, 1980; Murray, 2007).

You can get more detail about this technique from McKeachie (2013) or Whitford (2008). Many online forms are available as well: the University of Maryland offers instructions and evaluation forms, as do the University of British Columbia, Indiana University Bloomington, and the Massachusetts Institute of Technology. An especially helpful tutorial and sample forms are available here.

One final point: the most essential aspect of the midpoint evaluation is that you are willing to change something. The technique is not effective if you don’t plan to give and take with the students. It is also important not to overuse the technique within a single course offering: students can suffer from survey overload!

Good luck, and we are half-way there!

Cohen, P.A. (1980). Effectiveness of student-rating feedback for improving college instruction: A meta-analysis of findings. Research in Higher Education, 13, 321-341.

Crews, T.B., & Curtis, D.F. (2011). Online course evaluations: Faculty perspective and strategies for improved response rates. Assessment & Evaluation in Higher Education, 36, 865-878.

Freeman, R., & Dobbins, K. (2013). Are we serious about enhancing courses? Using the principles of assessment for learning to enhance course evaluation. Assessment & Evaluation in Higher Education, 38, 142-151.

McKeachie, W.J. (2013). Teaching tips: Strategies, research, and theory for college and university teachers (14th ed.). Lexington, MA: D.C. Heath and Company.

Murray, H.G. (2007). Low-inference teaching behaviors and college teaching effectiveness: Recent developments and controversies. In R.P. Perry & J.C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 145-200). Dordrecht, The Netherlands: Springer.

Whitford, F.W. (2008). College teaching tips. Upper Saddle River, NJ: Pearson.

Using Exam Wrappers as a Learning Tool

By: Dede deLaughter

She sits in your office, clearly fighting back tears. Pointing to the grade on her test, she says, “I’ve never made anything below a B before.”

Sound familiar? How many of us have had similar conversations with our students? And when we probe a little further, asking, among other questions, “What will you do to prepare for the next test?”, how many times do we hear, “I just need to study harder!”? One wonders: what does “study” look like in the average student’s mind, and what does “harder” look like? What if doing more of what didn’t work in the first place yields the same results? As the saying often attributed to Albert Einstein goes, “Insanity is doing the same thing over and over again and expecting different results.”

How can we turn such occasions into genuine learning opportunities for our students? In their book How Learning Works: Seven Research-Based Principles for Smart Teaching*, Ambrose et al. provide compelling reasons to incorporate exam wrappers into our curriculum. An exam wrapper is a form students fill out after receiving a graded test (or any other assessment), with the intent of guiding them through the self-correction process. The exam wrapper asks students to reflect on their preparation time and strategies, determine their areas of strength and weakness, and identify the types of errors they most commonly made. After students turn in their completed exam wrappers, the instructor reads through them for insight into how his/her students are studying. Then, a day or so before the next test, the professor returns the completed exam wrappers to the students in order to have a structured classroom discussion about how best to prepare for the upcoming test. This Purdue University Learning blog provides some good research on using exam wrappers as a metacognitive tool, and a quick Internet search for “exam wrapper” yields some good templates to adapt for your own courses. For example, this exam wrapper intentionally focuses on the main goals for using exam wrappers.
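As a concrete illustration, the three reflection areas above can be drafted into a simple wrapper template. The wording below is a hypothetical example for adaptation, not the instrument from Ambrose et al.:

```python
# A hypothetical exam-wrapper template covering the three areas described above:
# preparation, strengths/weaknesses, and common error types.
EXAM_WRAPPER = [
    "1. Roughly how many hours did you spend preparing, and with what strategies?",
    "2. On which topics were you strongest? On which were you weakest?",
    "3. Classify each point you lost: careless error, misread question, "
    "concept not understood, or ran out of time.",
    "4. What will you do differently to prepare for the next test?",
]

def render_wrapper(title):
    """Return the wrapper as printable text with a title line."""
    return "\n".join([title, ""] + EXAM_WRAPPER)
```

A handout like this can be regenerated per exam by changing only the title, which keeps the reflection prompts consistent across the semester.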

How might a guided classroom discussion go so that students can determine how best to prepare for the next test? The discussion might begin, not with study strategies and techniques but, instead, with a conversation about their mindset. In her pivotal work on fixed versus growth mindsets, Dr. Carol Dweck explains how our beliefs about our capacity to learn affect everything about our performance, from how we approach intellectually challenging material to how we deal with failure and criticism. Nigel Holmes has summarized Dweck’s findings in his Two Mindsets graphic. In short, students with a fixed mindset view every assignment, every assessment, every learning task as a referendum on their intelligence, which often results in minimal effort, giving up, blaming, and/or avoiding challenging subject matter. “Why would I risk being seen as deficient?” is their internal message. In contrast, students with a growth mindset relish any chance to grow their brain, viewing even failures as growth opportunities. Their internal message is “I can do this, I can learn, and my devoted effort is what results in mastery.”

Thankfully, Dweck’s research also shows that learners with a fixed mindset can develop a growth mindset when discussions about mindset are woven into the fabric of the academic environment. A single discussion will not generally result in a mindset shift; rather, the discussions and coaching need to be an ongoing, natural part of the learning environment. Once students are trained to recognize the voice of a fixed mindset and to reframe their beliefs, their learning dramatically improves, often resulting in the deep learning we all desire for our students. Ken Bain’s book What the Best College Students Do* provides a thorough discussion of how a fixed mindset lends itself to surface or strategic learning, while a growth mindset lends itself to deep, lifelong learning.

Once students begin to embrace a growth mindset, they are better able to determine which study strategies actually work for them, and they become more inclined to experiment with different techniques in different courses. Students with a growth mindset are also better able to distinguish active learning strategies from passive, rote memorization. As we all know, memorization does not equal understanding. Take flash cards as an example: once students recognize what deep learning looks like and are willing to put forth the effort to achieve it, they can offer each other suggestions for turning the passive, almost mindless activity of flipping through a stack of index cards into an active learning technique, such as making games from the cards.

Finally, to assist students in branching out and developing a broader repertoire of study strategies that engage their learning styles, both inside and outside the classroom, consider providing them a link to a short Multiple Intelligence assessment and then encouraging students with similar Multiple Intelligences (MI) to collaborate, using their exam wrapper and their MI results, along with the Practice tab, to devise ways to work from their areas of strength.

The more our students engage in learning about their learning, and the better we are at guiding them through this process, the more they will claim ownership of their own education. We can help restore a measure of sanity to our students’ learning by introducing them to exam wrappers and the benefits of doing an honest “post-mortem” on their previous academic efforts. (A self-guided “tour” through the Mindset and Multiple Intelligence self-assessments is available under the Grow Your Brain section on the UNG Learning Support website.)

*available to check out through CTLL

Tk20 Assessment Training for Faculty

UNG will be using new academic assessment software (Tk20), which will improve access to assessment results and simplify the process of documenting continuous program improvement. Fall 2013 implementation will include

  • General Education Area A1
  • Selected Academic Programs
    • Bachelor of Applied Environmental Spatial Analysis
    • Bachelor of Fine Arts (Theatre)
    • Bachelor of Human Services Delivery and Administration

Program Coordinators and/or Assessment Coordinators for these programs have been contacted, are aware of the fall 2013 assessment implementation, and have shared this information with their deans and department heads. This procedure will be followed each semester.

Institutional Effectiveness through CTLL will provide training on the system each semester in order to help faculty become acclimated to the reporting process. Generally, faculty will receive notification during the first weeks of the semester if their course is to be assessed. (This fall, faculty will be notified in September.)

Faculty who receive a notification email that their course is going to be assessed should attend one of the Tk20 training sessions offered for that particular semester.

For those whose courses are being assessed this year, and for department leaders who want to learn these processes, the Office of Institutional Effectiveness is providing training. Faculty will be responsible for inputting their data into Tk20, the software system that will be used to document and track each program’s assessment efforts. The training sessions will provide step-by-step details of the reporting process.

Please register by sending the session title, date, and time to rsvp.ctll@ung.edu.

Tk20 Assessment Training for Faculty
Wednesday, October 2, 2013
12:00 pm – 1:00 pm
Facilitator:          Dr. Laveda Pullens, Academic Assessment Coordinator (Institutional Effectiveness)
Location:            Gainesville Campus – Nesbitt, Room 5105

Tk20 Assessment Training for Faculty
Thursday, October 3, 2013       
10:00 am – 11:00 am
Facilitator:          Dr. Laveda Pullens, Academic Assessment Coordinator (Institutional Effectiveness)
Location:            Cumming Campus – University Center, Room 246

Tk20 Assessment Training for Faculty
Thursday, October 3, 2013       
2:00 pm – 3:00 pm
Facilitator:          Dr. Laveda Pullens, Academic Assessment Coordinator (Institutional Effectiveness)
Location:            Cumming Campus – University Center, Room 246

Tk20 Assessment Training for Faculty
Tuesday, October 8, 2013       
10:00 am – 11:00 am
Facilitator:          Dr. Laveda Pullens, Academic Assessment Coordinator (Institutional Effectiveness)
Location:            Dahlonega Campus – Library Technology Center, Room 162

Tk20 Assessment Training for Faculty
Tuesday, October 8, 2013       
2:00 pm – 3:00 pm
Facilitator:          Dr. Laveda Pullens, Academic Assessment Coordinator (Institutional Effectiveness)
Location:            Dahlonega Campus – Library Technology Center, Room 162

Tk20 Assessment Training for Faculty
Tuesday, October 15, 2013       
12:00 pm – 1:00 pm
Facilitator:          Dr. Laveda Pullens, Academic Assessment Coordinator (Institutional Effectiveness)
Location:            Gainesville Campus – Nesbitt, Room 5105

Tk20 Assessment Training for Faculty
Tuesday, October 22, 2013       
10:00 am – 11:00 am
Facilitator:          Dr. Laveda Pullens, Academic Assessment Coordinator (Institutional Effectiveness)
Location:            Oconee Campus – Student Resource Center, Room 564

Tk20 Assessment Training for Faculty
Tuesday, October 22, 2013       
2:00 pm – 3:00 pm
Facilitator:          Dr. Laveda Pullens, Academic Assessment Coordinator (Institutional Effectiveness)
Location:            Oconee Campus – Student Resource Center, Room 564

Facilitator Contact Information:  Dr. Laveda Pullens   |   laveda.pullens@ung.edu  |  678-717-3819  |  Room 2140, Nesbitt Building, Gainesville Campus