Friday, November 21, 2014

Instructional Design - An Annotated Bibliography

Anderson, L.W. & Krathwohl, D.R. (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. New York: Longman.

Instructional objectives are the exposure to terminology and the scaffolded practice that build toward educational objectives. Educational objectives are the mid-level competencies, the level at which assessment should culminate. Global objectives are the connection to the workplace: what a professional in the field should be able to do, or at a minimum, the big-picture goal of a degree program. The process can also be described in the other direction, as a cognitive mapping exercise in which global objectives are deconstructed into educational objectives, and educational objectives into instructional objectives, using a template built on the six levels of the taxonomy: Remember, Understand, Apply, Analyze, Evaluate, and Create. Objectives describe what a student should be able to do, not a specific instructional or assessment activity.

Dupin-Bryant, P.A. & DuCharme-Hansen, B.A. (2005). Assessing Student Needs in Web-Based Distance Education. International Journal of Instructional Technology & Distance Learning 2(1).

Student needs assessment helps the instructor plan to facilitate a course learning experience. Learning objectives may or may not already be in place when the needs assessment is carried out, but it will help refine the instructional objectives that need to be included and determine where to start. Areas to assess include computer skills, learning styles, available resources, desired outcomes, and prior experience. Computer literacy may be taught to the entire class, taught just to the group that needs it, or integrated into other learning activities. There is a larger debate about the usefulness of learning styles inventories, but the important point is to provide a variety of types of content and activities. Available resources are probably more important in web-based education but matter in any environment: do students have the hardware, software, and internet access to participate fully in all class activities? Course objectives are one thing, but students may be looking to get something else out of the class. Always build on what students have previously learned, whether from previous classes or from experience in the workplace.

Fisher, D. H. (2012). Warming up to MOOCs. The Chronicle of Higher Education. Retrieved March 19, 2013 from http://chronicle.com/blogs/profhacker/warming-up-to-moocs.

Hesitation to use materials from other instructors in one's own classroom may stem from insecurity about what others will think of outsourced lectures and from uncertainty about what to do with class time instead of lecturing. The author used MOOC content as homework assignments, flipping the classroom so class time could be spent on higher-level discussion of the material rather than just its presentation. By using other faculty members' materials and contributing back, the community of scholarship extends from the research component of the faculty role into teaching, which is too often ignored.

Fusch, D. (2012). Course materials for mobile devices: Key considerations. Higher Ed Impact. Retrieved March 19, 2012, from Academic Impressions http://www.academicimpressions.com/news/course-materials-mobile-devices-key-considerations.

People spend as much time reading on digital screens as they do reading paper, and the amount of content read on mobile devices will soon surpass what is read on full-size computers. Faculty need to consider the usability and accessibility of the learning resources they assign to ensure those resources work well on mobile devices. Assume content will be accessed on a mobile device first; it is easier to move from mobile to desktop than the other way around. Record short videos, avoid PDFs, and break readings into smaller chunks. Copyright and licensing considerations are important, as different licensing may apply in the mobile realm. Using open content is one way to ensure it can be ported to other platforms.

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.

One school of thought holds that students learn by discovering or constructing concepts themselves, while another holds that direct explanation of concepts and related metacognition is most effective. The assumption behind the minimal-guidance approach is that experiential, self-constructed learning is best. The authors argue that an understanding of human cognitive architecture is needed to determine the most effective instructional methods. They explain that long-term memory permits experts to draw on vast experience to recognize a situation and know what to do. Given the limitations of working memory, minimally guided instruction taxes working memory while pushing little into long-term memory. They cite Shulman's distinctions among content knowledge, pedagogical content knowledge, and curricular knowledge (metacognition), as well as knowledge of how one works in the field (epistemology). Studying worked examples reduces cognitive load for novice learners who are appropriately prepared for them. Research on problem-based learning shows little or no gain in problem-solving ability.

Reed, S. K. (2006). Cognitive architectures for multimedia learning. Educational Psychologist, 41(2), 87-98.

Six theories of multimedia learning are reviewed; the first three come from memory research and the last three from instructional contexts. Paivio's Dual Coding Theory: visual imagery is an important method of coding concepts; dual coding refers to the use of both verbal and visual coding of semantic meaning. Baddeley's Working Memory Model: verbal and visual coding, with a verbal focus on phonological learning; includes a "central executive" that the learner uses to decide which modality to use in the moment; Baddeley later added the episodic buffer, where information from various modalities can be combined. Engelkamp's Multimodal Theory: acting out what is being learned results in greater recall, as action implies understanding, assuming the action is relevant to the semantics. Sweller's Cognitive Load Theory: multimedia design may decrease extraneous cognitive load by integrating information that needs to be presented together, providing worked examples, and building schemas; the split-attention and redundancy effects are described. Mayer's Multimedia Theory: draws on recommendations from the other models; seven principles – multimedia (people learn better from pictures and words together), spatial contiguity (corresponding words and pictures should be close to each other), temporal contiguity (words and pictures should be presented simultaneously), coherence (extraneous words, pictures, and sounds should be excluded), modality (animation + narration > animation + text), redundancy (animation + narration > animation + narration + text), and individual differences (low-knowledge and high-spatial learners are more affected by multimedia presentation). Nathan's ANIMATE Theory: visual representation through simulation helps model the solution to a problem.

Renninger, K. A. (2009). Interest and identity development in instruction: An inductive model. Educational Psychologist, 44(2), 105-118.

This model holds that both the interest and the identity of a student are important to consider in developing learning activities. Interest refers to a learner's desire to reengage with particular content after a previous experience. Identity is the learner's self-representation as someone who engages with particular content. Interest needs to be cultivated and sustained throughout all stages of individual development, and it requires some understanding of what it takes to engage, not just a vague sense of euphoria around an interesting topic. It is also important to consider interest in achievement, as opposed to interest in the content itself. Identity changes as learners mature and come to understand how much work is required to reach their educational goals.

Shute, V. & Towle, B. (2003). Adaptive E-Learning. Educational Psychologist, 38(2).

Early iterations of e-learning were concerned with simply getting information online, but the focus now is on improving learning and performance. To be most effective for each individual learner, the characteristics of each learner should be assessed. One behavior a system should encourage is exploration, as students who explore and work through optional material tend to perform better on assessments. The learner model represents what the individual learner knows, and the instructional model presents and assesses content in the most appropriate way. Adaptive e-learning should not simply reproduce the textbook as scrolling pages instead of physical ones; it should dynamically order and filter pages to present learners just what they need right when they need it.
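
To make the learner model / instructional model split concrete, here is a minimal sketch of the idea. It is not from the article; the topics, mastery numbers, and cutoff are all invented for illustration.

    # Illustrative only: a toy learner model (mastery estimates per topic) and an
    # instructional model that filters and orders content accordingly.
    learner_model = {"loops": 0.9, "functions": 0.55, "recursion": 0.2}

    content = [
        {"topic": "loops", "page": "loops-review"},
        {"topic": "functions", "page": "functions-practice"},
        {"topic": "recursion", "page": "recursion-intro"},
    ]

    def next_pages(model, pages, mastery_cutoff=0.8):
        """Skip topics the learner already knows; present the weakest topics first."""
        needed = [p for p in pages if model[p["topic"]] < mastery_cutoff]
        return sorted(needed, key=lambda p: model[p["topic"]])

    for page in next_pages(learner_model, content):
        print(page["page"])  # recursion-intro, then functions-practice; loops-review is skipped

A real system would update the mastery estimates from assessment results after each page, which is where the adaptation actually comes from.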

Smith, L. (2003). Software Design. In Guidelines for Successful Acquisition and Management of Software Intensive Systems (4th ed.). Hill Air Force Base, UT: U.S. Air Force Software Technology Support Center.

Given the complex nature of programming, design is the key phase between gathering requirements and actual development. Design includes several iterations: functional design (logic, desired outputs, rules, data organization, and user interface), system design (system specifications, software structure, security, and programming standards), and program design (software units, test plans, user documentation, install plan, training plan, and programmer manual). Design methods include structured design (functions and subroutines called in a defined order), object-oriented design (objects inherit from parents, changes can be pushed out to many related objects, and the specifics of what happens inside each object are less important), and extreme programming (frequent code review, testing, and release iterations).
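
A small, hedged illustration of the object-oriented point (my own fragment, not from the chapter): a change made once in a parent class is pushed out to every object that inherits from it.

    # Edit the parent's header() once and every report type that inherits
    # from Report picks up the change automatically.
    class Report:
        def header(self):
            return "ACME Corp Quarterly"

        def render(self):
            return self.header() + "\n" + self.body()

    class SalesReport(Report):
        def body(self):
            return "Sales figures..."

    class InventoryReport(Report):
        def body(self):
            return "Inventory counts..."

    print(SalesReport().render())
    print(InventoryReport().render())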

Stiggins, R. & DuFour, R. (2009). Maximizing the Power of Formative Assessments. Phi Delta Kappan 90(9). Retrieved January 31, 2013 from http://alaskacc.org/sites/alaskacc.org/files/Stiggins%20article.pdf.

Formative assessment helps track individual student achievement and performance and drive continuous improvement. Common assessments are created by multiple faculty members teaching the same course. Common formative assessments can result in significant improvement in learning if they are high quality, specifically integrated into instructional decision making, and used to benefit student learning. Assessments may be at the classroom, school, or institutional level. No matter the level or how they are used, assessments need learning targets that are clear, appropriately scaffolded, achievable, and based on established standards; the assessments themselves must be high quality and high fidelity, with results available in a timely and understandable form so the learner can do better next time. Common formative assessments can be used to determine how an individual student is doing as well as to compare classroom performance. The act of putting together a common assessment opens the conversation about what is truly important to measure, and that greater dialogue among faculty members results in a higher quality assessment than any individual teacher might create alone.

van Merriënboer, J. J. G., & Sweller, J. (2005). Cognitive load theory and complex learning: Recent developments and future directions. Educational Psychology Review, 17(2), 147-177.

When introduced, cognitive load theory led to new types of instructional methods, such as providing many worked examples instead of problems to solve. The theory posits that long-term memory is made up of schemas, which make meaning out of complex structures. Working memory is limited when dealing with new content but not when working with schemas drawn from long-term memory. Cognitive load theory deals with the processing of content in working memory to create schemas stored in long-term memory. To take a dynamic approach, in which instruction is automatically tailored to the learner, learners' knowledge must be assessed, and methods of promoting effective instruction for each group of learners are needed. Assessment should include the ability to generate correct responses as well as the mental effort required to do so.

White, B. & Frederiksen, J. (2005). A theoretical framework and approach for fostering metacognitive development. Educational Psychologist, 40(4).

Metacognition is crucial in helping individuals learn through inquiry and work together in teams. Understanding the inquiry process itself helps learners apply inquiry strategies more effectively, so inquiry includes both inquiry about the domain of study and inquiry about inquiry. Advisors who help manage metacognition may be automated tutors or other learners.

Wiggins, G. & McTighe, J. (2006). Backward Design. In Understanding by Design (Expanded 2nd Edition). Upper Saddle River, NJ: Pearson.

When planning curriculum, it is important to pay attention to local and national standards, but also to determine the specific understandings that are desired. The focus should be on learning rather than on teaching. By determining a larger purpose first, the best resources and activities can be selected to achieve the goal. Traditional design falls into the trap of providing interesting experiences that do not lead to specific desired achievements, or of briefly covering all content without treating any of it with enough depth to be meaningful. Start with the desired goals and ask what appropriate evidence of achievement would look like, and likewise what the assessment should consist of. Only after determining the desired results and an appropriate assessment can learning experiences be planned. Some sacred cows may be harmed in the process of ensuring every activity has a specific purpose and is effective in reaching the targets; the textbook, for example, may end up playing a less central role than it does in many classrooms today.

Friday, October 31, 2014

Prioritization

Let's take a look at the importance-urgency matrix from Dr. Stephen Covey. He talks about the need to prioritize your activities in order to manage your time effectively. You can actually keep a log of your activities throughout the day and categorize them in terms of how urgent and important they are, and you may be surprised at where you spend most of your time. People often claim they don't have time for important items like planning and building key relationships because they are always putting out fires. It's the important and urgent items that demand our attention immediately. In between all the fires, we have all sorts of other small activities that fill in the rest of our time, but these are often unimportant items that are either forced on us by others or driven by personal preferences and obsessions.

The trick is to prioritize properly. By focusing attention on the important but not urgent items, such as strategic planning and building key relationships in accordance with your strategy, the fires will actually put themselves out. If you have a good relationship with a customer, they'll understand when one order doesn't come through right, so while you need to fix it, it's not as much of a fire as it would be if you had to worry about losing the account altogether. On the other hand, you might have a customer that doesn't fit your target demographic, who causes problems, and who you don't make much money on anyway. If you can make the strategic decision to drop that customer, on whose fires you're wasting a lot of time and energy, you may come out ahead, because you can focus that attention on opportunities that will provide a better return on investment. In order to have the time to focus on strategic activities, you have to eliminate the unimportant activities that don't serve a greater purpose. Eliminate or shorten some meetings; set a schedule to check email once every few hours instead of letting it distract you as it comes in; stop creating reports that you think others need but that no one actually looks at. For an IT department, focusing on the strategic aspects of the system infrastructure will help ensure projects are rolled out in a way that supports the company, and may even allow technology to drive new business opportunities.

The SWOT Analysis is an example of a Quadrant II activity that helps you understand where you should be focusing your time in order to be the most effective. A SWOT Analysis doesn't need to be overly structured or complicated. Spending a lot of time building a pretty SWOT template and training everyone on its use would be a good example of an unimportant activity. Put it out there and let it happen, whether you use a 2x2 matrix, bulleted lists, or more of a free-form mind map. The strengths and weaknesses are inward facing. They refer to inherent qualities of the company or department and what they're currently doing. Opportunities and threats are outward facing. They are qualities of the environment, actions of competitors, or imminent events that will have an effect on you. The goal is to build on strengths and take advantage of opportunities, while eliminating weaknesses and preventing threats from knocking you down.

In order to get a handle on where to focus attention, after brainstorming, it's important to group and rank the items you have listed. Provide additional details to determine the size of the threat or the amount of money an opportunity may be worth to you. Often there are connections between the internal and external analysis. You can leverage your strengths to take advantage of opportunities and avoid threats. Overcoming a weakness may open up new opportunities. So draw those connections and quantify each aspect of your analysis, but keep your analysis simple and visual. Keeping it all on one page will allow you and others to see how all the parts tie together. Provide additional information as a separate write-up and attach it on following pages. Of course, as you begin making decisions on what to focus on, you will come up with a more detailed plan, which is great, but the initial analysis should remain simple and understandable by anyone who picks it up.
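
For those who like to see the quantification step spelled out, here is a tiny, purely illustrative sketch; the items and dollar figures are invented, and a spreadsheet does this job just as well.

    # Invented SWOT items with rough dollar impacts; rank each category by size.
    swot = {
        "opportunities": [("new mobile channel", 250_000), ("partner integration", 90_000)],
        "threats": [("legacy system outage", 400_000), ("competitor price cut", 120_000)],
    }

    for category, items in swot.items():
        ranked = sorted(items, key=lambda item: item[1], reverse=True)
        print(category)
        for name, impact in ranked:
            print(f"  {name}: ~${impact:,}")

The point is only that each item gets a number attached so the biggest ones float to the top; the analysis itself should stay on one page.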

People usually like showing off their good side, so it is easy to list the strengths, however realistic they actually are. It's more difficult for managers to get honest answers from their employees about real weaknesses and threats, so it is important to create a safe space when brainstorming the more negative aspects. This is where having done your relationship building with your team pays off: there will be enough trust to do this legitimately. You might have to use a technology solution that lets team members submit weaknesses and threats anonymously if the trust isn't there to do it face to face. Being aware of and honest about your challenges can show as much strength as listing your strengths, or more.

Tuesday, September 30, 2014

Introvert's Dream


A colleague of mine was invited to a little virtual coffee break get-together for others who had been hired around the same time as him. There were several conversation starters sent out beforehand. I wasn't invited, so I don't know how much they stuck to the script or talked about other things.

But one of the questions spoke to me:
You are stuck on a desert island and you can only bring one song, one movie and one type of food. What would you bring?
Keep your song and movie if I can have pizza from Sacco's and a promise that you're not pulling my leg about the desert island thing.


If I really do get a song, too, it would be one of Clapton's hour-long jams.

Monday, August 4, 2014

Simplification

As IT is integrated into more and more aspects of our lives at work, at home, and everywhere in between, the need to make all the varying systems around us work together seamlessly leads to increased complexity. But more complexity means more cost and more likelihood of downtime. The article linked below discusses the importance of keeping it simple and provides some basic principles for making your organization more flexible while keeping things simple at the same time. The points in their simplification roadmap are to start at the top, use an entrepreneurial approach, use cloud services when available, and be agile. With buy-in at all levels and a focus on adaptability, you can concentrate on the unique value you add rather than wasting time reinventing the wheel or maintaining the status quo.

http://www.cio.com/article/2451671/it-strategy/simplifying-it-pays-off-with-big-savings-better-business-success.html

Thursday, July 17, 2014

Technology Rights

A recent court case in Europe has highlighted a right that many would not immediately list among those most important to them: the right to be forgotten. Privacy expectations in Europe are different from those in the United States, as Google found out when it took pictures all over Germany for its popular Street View service. But what about the right to have links removed that refer to old newspaper articles about something that happened a decade or two ago? It happened. There was a newspaper article about it. It's public information. Things change over time, and it's old news, but should the original articles still be searchable? Technology is an enabler. It helps you do what you want bigger and faster than you could without it. But that doesn't mean you can always control it. The man suing for removal of a past legal issue now shows up in more search results than he did before, magnifying the discussion around him. So how do you effectively leverage technology to magnify the positive and manage the negative without it getting out of control on you? That's the tough question to ask in your organization.

More on the Right to be Forgotten:

http://www.legalweek.com/legal-week/blog-post/2346341/the-right-to-be-forgotten-case-google-right-this-time-ecj-hopelessly-wrong

http://www.computerworld.com/s/article/9249793/Microsoft_offers_European_Bing_users_the_right_to_be_forgotten_

Tuesday, July 8, 2014

Managing the Critical Path

When planning a project, the temptation is always there to build in extra time everywhere so that your schedule never slips. Just as when you're putting in tile or carpet, you order 10% more than you measured in case something gets damaged or you mis-measured. Time is the biggest resource you have on a project, and the most visible "failure" you can have is missing your launch date. So it makes sense that you would add 10% or some other fudge factor to all your estimates, right? Not so fast. If you have a time-sensitive launch, set the completion date well enough before you really need it, but don't just give everyone extra time to get everything done.

The critical path is the sequence of tasks that need to be completed on time for the project to complete on time. If you have slack built in between tasks early in your project, then what you've done is make it so those tasks can be delayed without changing the completion date. By definition, tasks that can be delayed without affecting the project completion date are not critical. Do this and you'll end up with a very short critical path, with just the last task or two showing in red, meaning only the last couple have to be completed on time. That makes sense if you look at it logically. It may be logical and possible, but is it allowed? I'm not sure I can answer that question, or that I'd want to even if I could. The better question than whether it's allowed is whether it's a good idea. And that, I can say emphatically, is not a good idea.

Anyone looking at your Gantt chart or network diagram will expect a critical path. There are many ways you can show that, and there are many possible ways to put together a project. You could have a completely sequential project, in which only one task is worked on at a time and everything is on the critical path. It's neither logical nor desirable, except in the rarest of circumstances, for every task to be on the critical path. At the other extreme, it is neither logical nor desirable to have only a couple of tasks, or even no tasks, on the critical path. At its most basic level, the critical path is just a calculation. It is what it is. You measure the lengths of the various paths through the project, and the longest one is critical. At a more strategic level, the critical path is key to your management of the project, because it is the series of tasks you will watch most closely for scheduling issues. If everything is critical, or if nothing is, then you have nowhere to focus your attention, and the project just does whatever it wants. You can probably see how that might be a bad thing.
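
If you want to see just how mechanical the calculation is, here is a minimal sketch in Python. The task names, durations, and dependencies are made up for illustration; a real schedule comes out of your project management tool.

    # Toy schedule: durations in days and which tasks each task waits on.
    from functools import lru_cache

    durations = {"design": 5, "build": 10, "docs": 3, "test": 4, "launch": 1}
    depends_on = {
        "design": [],
        "build": ["design"],
        "docs": ["design"],
        "test": ["build"],
        "launch": ["test", "docs"],
    }

    @lru_cache(maxsize=None)
    def finish(task):
        """Earliest finish: longest chain of predecessors plus the task's own duration."""
        return max((finish(dep) for dep in depends_on[task]), default=0) + durations[task]

    def critical_path():
        """Walk backward from the latest-finishing task through its longest predecessor."""
        path, task = [], max(durations, key=finish)
        while True:
            path.append(task)
            if not depends_on[task]:
                return list(reversed(path))
            task = max(depends_on[task], key=finish)

    print(critical_path())                       # ['design', 'build', 'test', 'launch']
    print(finish("launch"), "days to complete")  # 20 days to complete

Notice that "docs" never appears on the path: it has slack, so it can slip several days without moving the launch date. That is exactly why the critical tasks are the ones you watch, and why a schedule padded so that nothing is critical leaves you with nothing to watch.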

Building in slack between tasks, aka giving people extra time to finish their tasks, is not meaningful or helpful, and if anything it is damaging, because if you give people extra time, they will take it. If you give someone a one-week task but two weeks to do it in, they will wait until the second week to start. The idea, presumably, is that if they end up taking six days instead of five, the schedule doesn't change. That's good in theory, but if they have one week to do the work, start in week one, and go one day over, they are just one day late. If they have two weeks, start in week two, and go one day over, they are now six days late. Even if they have two weeks, start at the beginning, and finish in five days, there are five phantom days everyone else spends sitting around waiting. Why tell the next team they can't start for five days when the previous work is done, just to maintain the schedule? If there are things that have to happen on a certain date, you hard-code those and work around them, but those are pretty rare. If you want to build in some slack, put it at the end. If management wants things done by the end of the year, you plan the project to complete on, say, October 31, and October 31 is the due date you publish. You don't tell everyone that the goal is Halloween but that you don't care if it's not done until Christmas. Stick to Halloween. If it does go over by a week, we'll all survive. But the second you start telling people your "real" go-live date, that's the date everyone will aim for, and before you know it, New Year's comes and goes and everyone is still trying to wrap up loose ends that should have been done two months prior.

Monday, June 30, 2014

Self Plagiarism

Citations can be a messy thing. They're actually simpler than most people think, but everyone likes to make them messy. Citing your own work, of course, does add a layer of complexity.

At its simplest, using someone else's idea means you need to cite them. There are two reasons for this. One is that you should give credit when an idea isn't actually yours. It's only right. Even if you put it in different words, it's still their idea. The second is that citing lends credence to what you're saying, and that is the part most people don't realize. We're often taught that we need to be creative and think of things completely on our own, but since when are you or I the world's expert on any given topic? Better to apply what the experts are saying than to make something up yourself. Using someone else's idea isn't weak; it actually makes your argument stronger.

That said, with self-citation there are two principles at play. One is copyright and the idea of giving someone credit for an idea you're using. Obviously, if you write something, you own the copyright to what you create, and you can copy what you wrote verbatim or put it in different words as much as you like, since it's your copyright. Beyond copyright, however, the idea of self-plagiarism comes into play as a particular ethical issue of higher education that is not really applicable elsewhere. Generally speaking, professors don't like you using something you wrote for another class in their class without permission. Sometimes this varies by professor, and other times it is an institutional policy. Where I currently teach, there is no policy against it, because under the competency-based model it's not likely that something a student writes for one class will work in another class without major revisions. If you have published something, it's yours, so do what you want with it. If you think citing yourself will lend additional credence to what you say because it's been published, then use it to make your presentation stronger.

Many schools use a service like TurnItIn, however, to check if something a student submits was submitted to another class or found on the Internet somewhere. So it all comes down to execution, where the rubber hits the road. If you can copy something you wrote elsewhere but the computer dings you for it, you'll have to deal with it and explain what you did, even if it was perfectly okay to do so. If you make it clear up front what is going on and cite everything, then it doesn't look as much like you're trying to hide something.

Wednesday, May 7, 2014

War on General-Purpose Computing

We hear a lot about security. We hear much about copyright. Not often do we think or hear about the connections between the two. Copyright- and internet-reform activist and science fiction author Cory Doctorow discusses just how these two come together in what he calls the War on General-Purpose Computing. The idea is that general purpose computers, such as your laptop or the servers locked away in the company data center, are designed to do exactly what we tell them. Because they can do anything, it's important that their owners/users know what is running on them. Rogue processes need to be found and removed to keep legitimate programs and data secure.

Being able to control everything on the computer means if it's displaying copyrighted content, you can (technologically, if not legally) make and distribute copies of that content. Content publishers claim this causes them to lose money, so they push for laws and technology that don't allow users to control everything on their previously general-purpose computer. Since owners/users can't even tell everything that is running, let alone actually control everything their computer is doing, security gives way as someone else is controlling their computer. Someone else is controlling your computer.

Friday, April 25, 2014

Make It Easy

In a New York Times technology advice column, a grandparent asks how to get videos of the grandchildren, recorded in portrait mode, to display properly, since the media player they use plays them back in landscape. Various software options are discussed for accomplishing the task, but there is a glaring hole in this advice: the video shouldn't be recorded in portrait mode to begin with. Video is more natural in landscape, so they should ask their son to rotate his phone when recording, but again there is a glaring hole in this advice: the fact is that it is more natural to hold a phone vertically. So if it's easier to hold a phone upright but video is more natural to view in widescreen, what's the solution? The solution is for hardware and software vendors to build their cameras so they record in landscape even when the phone is held vertically. It would be simple to do and would eliminate many of the poor-quality videos being recorded. You may not work somewhere that makes hardware or software for smartphones, but wherever you do work, there's probably a similar issue you could solve just by paying attention to the user experience and making it easier for people to use their technology well. If there's something you want people to do, the solution is simple: make it easy.
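
To show how small the change really is, here is a hedged sketch of the idea in Python. The camera pipeline is hypothetical and the frame sizes are invented; the point is just that the sensor's frame is already landscape, and the portrait video only exists because the app crops it.

    import numpy as np

    # Hypothetical camera pipeline. The sensor's native frame is landscape;
    # a typical app crops it to a tall slice when the phone is held upright.
    SENSOR_H, SENSOR_W = 1080, 1920

    def portrait_crop(frame):
        """What most camera apps do today: keep only a tall 9:16 center slice."""
        center, half_width = SENSOR_W // 2, (SENSOR_H * 9 // 16) // 2
        return frame[:, center - half_width : center + half_width]

    def frame_to_store(frame, orientation):
        """Proposed behavior: store the full landscape frame regardless of orientation."""
        return frame

    frame = np.zeros((SENSOR_H, SENSOR_W, 3), dtype=np.uint8)
    print(portrait_crop(frame).shape)               # (1080, 606, 3) - the tall video
    print(frame_to_store(frame, "portrait").shape)  # (1080, 1920, 3) - stays widescreen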


Monday, March 31, 2014

The Statistics of a Degree

This video posits that the school system somehow robs students and that they will be better off if they don't get a degree; instead they should educate themselves on the street or in their garage. The performer (yes, he's performing, earning a YouTube paycheck from millions of us watching his video and the associated advertisements) asks the viewer to look at the statistics, then proceeds to list a dozen predictable outliers who were wildly successful without graduating from college.


Let's actually look at the statistics, shall we?


Maybe you're special and will be the next outlier. Maybe our schools could do things more efficiently (okay, not maybe; they do need an overhaul). Maybe you'll be more likely to have a higher paying job if you get a degree.