Tuesday, December 16, 2014

Information Systems Success Models - An Annotated Bibliography

DeLone, W.H. & McLean, E.R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1).

IS success is multidimensional and interdependent, so interactions between success dimensions need to be isolated. Success dimensions should be based on the goals of the research as well as on proven measures where available, and the number of factors should be minimized. The key dimensions in the model are system quality, information quality, system use, user satisfaction, individual impact, and organizational impact.

Rai, A., Lang, S.S., & Welker, R.B. (2002). Assessing the validity of IS success models: An empirical test and theoretical analysis. Information Systems Research, 13(1).

IS success models are compared. One major factor that differs among models is the treatment of IS Use. Some models treat Use as part of a process, since it is a prerequisite to other factors; others treat it as an indicator of success, since people will not use a system unless they have determined it will be useful to them; and models further differ on whether they rely on perceived usefulness or on measured use. The Technology Acceptance Model suggests that perceived usefulness and ease of use directly impact user behavior and system use.

Seddon, P.B. (1997). A respecification and extension of DeLone and McLean’s model of IS success. Information Systems Research, 8(3).

Standard variance models assert that variance in independent variables predicts variance in dependent variables. Process models, on the other hand, posit that not only is the occurrence of events necessary but that a particular sequence of events leads to a change in the dependent variable. The presented IS success model removes the process component of DeLone and McLean’s model. The problematic model contained three meanings of information system use. One meaning is that use provides some benefit to the user. A second, invalid, meaning presented use as a dependent variable of future use (i.e., if the user believes the system will be useful in the future, they will use it now). The third, also invalid, is that use is an event in the process that leads to individual or organizational impact. The proposed model links measures of system and information quality to perceived usefulness and user satisfaction, which in turn lead to expectations of future system usefulness and then to use. Observed benefits to other individuals, organizations, and society also influence perceived usefulness and user satisfaction regardless of system or information quality.

Velasquez, N.F., Sabherwal, R., & Durcikova, A. (2011). Adoption of an electronic knowledge repository: A feature-based approach. Presented at 44th Hawaii International Conference on System Sciences, 4-7 January 2011, Kauai, HI.

This article discusses the types of use among knowledge base users. It utilized a cluster analysis to identify three types of users: Enthusiastic Knowledge Seekers, Thoughtful Knowledge Providers, and Reluctant Non-adopters. Enthusiastic Knowledge Seekers made up the largest group at 70%. They had less knowledge and experience and shared little if anything of their own but considered the knowledge base articles to be of high quality and very useful. The Thoughtful Knowledge Providers, 19% of the users, submitted quality articles to the knowledge base, enjoyed sharing their knowledge with others, had moderate experience, and were intrinsically motivated. The smallest group, Reluctant Non-adopters at 11%, were experts who were highly experienced and adept at knowledge sharing but lacked the time or intrinsic motivation to contribute meaningfully. They considered the knowledge base to be low quality and did not consider it worth their time to work on improving it.

Thursday, December 4, 2014

Cluster Analysis and Special Probability Distributions - An Annotated Bibliography

Antonenko, P., Toy, S., & Niederhauser, D. (2012). Using cluster analysis for data mining in educational technology research. Educational Technology Research and Development, 60(3), 383-398.

Server log data from online learning environments can be analyzed to examine student behaviors in terms of pages visited, length of time on a page, order of links clicked, and so on. This analysis is less cognitively taxing for the student than think-aloud techniques, and for the researcher as well, since no coding of behaviors is involved. Cluster analysis groups cases such that they are very similar within the cluster and dissimilar to cases outside the cluster across target variables. It is related to factor analysis, where regression models are created based on a set of variables across cases, but in cluster analysis, cases are then grouped. Proximity indices (squared Euclidean distance, or the sum of the squared differences across variables) are calculated for every pair of cases. Squaring makes them all positive and accentuates the outliers. Various clustering algorithms are available to then group similar cases. Ward’s is a hierarchical clustering technique that combines cases one at a time, from n clusters down to 1 cluster, at each step choosing the merge that minimizes the error sum of squares; it is used when there is no preconceived idea about the likely number of clusters. Using k-means clustering, a non-hierarchical technique, an empirical rationale for a predetermined number of clusters is tested. It may also be used when there is a large sample size in order to increase efficiency; if no empirical basis exists, the model is run on 3, 4, and 5 clusters. The method calculates k centroids and associates cases with the closest centroid, repeating until the error is minimized by allowing cases to move to a different centroid. It may also be possible to combine two different kinds of techniques, for example, a Ward’s cluster analysis on a small sample followed by a k-means cluster analysis based on the findings from Ward’s.
After determining the clusters, the characteristics of each cluster should be compared to ensure there is a meaningful difference among them, and that there is a meaningful difference in the outcome based on their behaviors, since cluster analysis can find structure in data where none exists. ANOVA may then be used to determine, for each cluster, how much each variable contributes to variation in the dependent variable. It may be useful to apply more than one technique and compare or average the results, as different techniques may produce different clusterings.
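The two-stage approach described above (Ward’s to suggest a cluster count, then k-means) can be sketched with standard scientific-Python tools. The numpy/scipy choices and the synthetic data are assumptions for illustration; the article itself prescribes no software.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
# Synthetic "log data": two behavior variables (e.g., pages visited,
# minutes on task) drawn from three loose groups of students.
data = np.vstack([
    rng.normal([5, 10], 1.0, size=(20, 2)),
    rng.normal([20, 3], 1.0, size=(20, 2)),
    rng.normal([12, 25], 1.0, size=(20, 2)),
])

# Stage 1: Ward's hierarchical clustering, merging cases so that the
# within-cluster error sum of squares grows as little as possible.
tree = linkage(data, method="ward")
ward_labels = fcluster(tree, t=3, criterion="maxclust")

# Stage 2: k-means with k informed by the Ward's solution; cases are
# reassigned to the nearest centroid until assignments stabilize.
centroids, km_labels = kmeans2(data, 3, seed=0, minit="++")

print(len(set(ward_labels)), len(set(km_labels)))
```

On well-separated data like this, both stages should recover the three underlying groups; the interesting step is comparing the two label sets, as the annotation suggests.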

Bain, L.J. & Engelhardt, M. (1991). Special probability distributions. In Introduction to probability and mathematical statistics (2nd Edition). Belmont, CA: Duxbury Press.

A Bernoulli trial has two discrete outcomes whose probabilities add up to 1. A series of independent Bernoulli trials forms a Binomial distribution, where the number of successes (or failures) is counted over n trials. A Hypergeometric distribution occurs when n samples are taken from a population of N+M without replacement. It can be useful for testing a batch of manufactured products for defects in order to accept or reject the batch. The Geometric distribution determines the number of Bernoulli trials that must occur to achieve the first success. The Negative Binomial distribution determines the number of Bernoulli trials that must occur to achieve a given number of successes. The Poisson distribution describes the probability of n independent events occurring over a fixed interval, given a known average rate. The discrete uniform distribution allows for n possible values, each with equal probability of occurrence.
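Each of the distributions above has a ready-made pmf in scipy.stats (using scipy here is my assumption; the chapter itself is software-neutral). A quick sketch of one probability from each:

```python
from scipy import stats

# Binomial: P(3 successes in 10 trials), p = 0.5
p_binom = stats.binom.pmf(3, n=10, p=0.5)

# Geometric: P(first success occurs on trial 4), p = 0.5
p_geom = stats.geom.pmf(4, p=0.5)

# Negative binomial: P(4 failures before the 3rd success), p = 0.5
# (note scipy counts failures rather than total trials)
p_nbinom = stats.nbinom.pmf(4, n=3, p=0.5)

# Hypergeometric: P(2 defectives in a sample of 5 drawn without
# replacement from a batch of 20 containing 4 defectives)
p_hyper = stats.hypergeom.pmf(2, M=20, n=4, N=5)

# Poisson: P(2 events in an interval whose mean count is 3)
p_pois = stats.poisson.pmf(2, mu=3)

# Discrete uniform over {1, ..., 6}: each value equally likely
p_unif = stats.randint.pmf(3, low=1, high=7)

print(p_binom, p_geom, p_unif)
```

One subtlety worth the comment above: scipy’s `nbinom` parameterizes the negative binomial by failures before the nth success, not by total trials as in the textbook statement.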

Blau, B.M., Brough, T.J., & Thomas, D.W. (2013). Corporate lobbying, political connections, and the bailout of banks. Unpublished manuscript, Department of Finance and Economics, Utah State University, Logan, UT.

When modeling a dependent variable with discrete count values, an appropriate count regression framework must be used. Poisson, negative binomial, and OLS are possible models. Poisson regression uses a distribution whose mean is equal to its variance. If the distribution is overdispersed (the variance is significantly greater than the mean), Poisson will not work. The paper does not discuss when negative binomial or OLS are appropriate.

Collins, L.M. & Lanza, S.T. (2010). Latent class and latent transition analysis for the social, behavioral, and health sciences. New York: Wiley.

Latent variables are unobserved but predicted by the observation of multiple observed variables. The latent variable is presumed to cause the observed indicator variables. Different models are used, depending on whether the observed and latent variables are discrete or continuous. Using a discrete latent variable helps organize complex arrays of categorical data. A given construct may be measured using either continuous or discrete variables, so the method used when there is a choice should be based on which best helps address the research questions. When cases are placed into classes, the classes are named by the researcher based on their similar characteristics.

Fisher, W.D. (1958). On grouping for maximum homogeneity. Journal of the American Statistical Association, 53, 789-798.

Grouping or clustering is a useful tool for distinguishing sets of cases based either on prior theory of what the groups should entail or with no initial structure in mind. Combining the groups has a goal of minimizing the variance or error sum of squares. For some small cases, a visual inspection of data may allow the researcher to come up with the clusters. In large data sets with evenly dispersed data, this is difficult or impossible.

Francis, B. (2010). Latent class analysis methods and software. Presented at 4th Economic and Social Research Council Research Methods Festival, 5 - 8 July 2010, St. Catherine’s College, Oxford, UK.

Latent class cluster analysis assigns cases to groups based on statistical likelihood; cases do not have to be assigned to discrete classes. K-means clustering is problematic, since the number of groups has to be specified a priori, cases are assigned to unique clusters, and it only allows continuous data.

Gardner, W., Mulvey, E.P., & Shaw, E.C. (1995). Regression analyses of counts and rates: Poisson, overdispersed Poisson, and negative binomial models. Psychological Bulletin 118(3).

Researchers often use suboptimal strategies when analyzing count data, such as artificially breaking counts into categories of 5 or 10, but this loses data and statistical power. Another ineffective strategy is to use regular linear regression or OLS. Using OLS, illogical values, such as negative counts, will be predicted, and the model’s variance of values around the mean is not likely to fit well. Another problem with OLS is heteroscedastic error terms, where larger values have larger variances and smaller values smaller variances. Nonlinear models that allow for only positive values and describe likely dispersion about the mean must be used. Poisson places restrictive assumptions on the size of the variance. The overdispersed Poisson model corrects for the large variances that are common. The negative binomial is another option. In the regular Poisson model, truncated extreme tail values could lead to underdispersion, and a large number of high values could lead to overdispersion. An overdispersion parameter is calculated by dividing Pearson’s chi-squared by the degrees of freedom; the variance is then modeled as the overdispersion parameter multiplied by the mean. The negative binomial model includes a random component that accounts for individual variances. The negative binomial model allows one to estimate the probability distribution, where the overdispersed Poisson does not.
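The overdispersion parameter described above can be sketched in a few lines of numpy (my assumption; the paper prescribes no software). For an intercept-only Poisson fit the fitted mean is just the sample mean, which keeps the Pearson statistic easy to compute by hand:

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdispersed counts: a Poisson whose rate itself varies across cases.
# Gamma-distributed rates yield a negative binomial marginally.
rates = rng.gamma(shape=2.0, scale=2.5, size=500)  # mean rate = 5
y = rng.poisson(rates)

# Under an intercept-only Poisson fit, the fitted mean mu is ybar, and
# Pearson's chi-squared sums the squared Pearson residuals (y-mu)^2/mu.
mu = y.mean()
pearson_chi2 = np.sum((y - mu) ** 2 / mu)

# Overdispersion parameter: Pearson chi-squared over residual df.
# Values well above 1 mean the Poisson variance assumption fails; the
# variance is then modeled as phi * mu instead of mu alone.
phi = pearson_chi2 / (len(y) - 1)
print(phi)
```

With these simulated rates the variance is roughly 3.5 times the mean, so `phi` should land well above 1, signaling that an overdispersed Poisson or negative binomial model is needed.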

Osgood, D.W. (2000). Poisson-based regression analysis of aggregate crime rates. Journal of Quantitative Criminology 16(1).

The typical approach to analyzing per capita rates of occurrence is to use the OLS model. However, OLS does not provide an effective model when the number of recorded events is small. For large populations, OLS may work, but for a small number of events in a small population, the result is an overestimated rate of occurrence. Small counts will often be skewed, with a floor of 0. The Poisson model corrects for many of these issues with OLS; however, the unlikely assumption that the Poisson’s mean equals its variance must hold. Due to individual variations and correlation between observed values and variance, overdispersion is common. Adjusting the standard errors, and thus the t-test results, for the overdispersion helps correct the model. The negative binomial model combines the Poisson distribution with a gamma distribution that accounts for unexplained variation.

Romesburg, H.C. (1990). Cluster Analysis for Researchers. Malabar, FL: Robert E. Krieger Publishing Company.

The steps in doing cluster analysis begin with creating the data matrix, including objects and their attributes. The objective is to determine which objects are most similar based on those attributes. An optional step is to standardize the data matrix. A resemblance matrix is then calculated, showing for each pair of objects a similarity coefficient, such as the Euclidean distance. Based on the similarity coefficient, a tree is created by combining similar objects and comparing their average to the other existing objects. Then rearrange objects in the data matrix to show the closest objects next to each other.
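Romesburg’s steps can be traced with numpy/scipy (my choice of tools; the book predates them): build the data matrix, optionally standardize, compute the resemblance matrix, then grow the tree by merging the most similar objects and comparing group averages.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import average  # UPGMA: merges by group averages

# Step 1: data matrix -- rows are objects, columns are attributes.
data = np.array([[1.0, 2.0], [1.2, 2.1], [8.0, 9.0], [8.3, 8.8]])

# Step 2 (optional): standardize each attribute to mean 0, sd 1.
z = (data - data.mean(axis=0)) / data.std(axis=0)

# Step 3: resemblance matrix of pairwise Euclidean distances
# (a similarity coefficient for every pair of objects).
resemblance = squareform(pdist(z, metric="euclidean"))

# Step 4: build the tree by repeatedly merging the most similar
# objects/groups, comparing group averages to the remaining objects.
tree = average(pdist(z))

print(resemblance.shape)
```

The final rearrangement step (ordering the data matrix so the closest objects sit next to each other) corresponds to reading objects off the tree in leaf order.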

Velasquez, N.F., Sabherwal, R., & Durcikova, A. (2011). Adoption of an electronic knowledge repository: A feature-based approach. Presented at 44th Hawaii International Conference on System Sciences, 4-7 January 2011, Kauai, HI.

This article discusses the types of use among knowledge base users. It utilizes a cluster analysis to identify three types of users. The clustering methods compared were Ward’s, between-groups linkage, within-groups linkage, centroid clustering, and median clustering, and the one with the best fit was used.

Wang, W. & Famoye, F. (1997). Modeling household fertility decisions with generalized Poisson regression. Journal of Population Economics 10.

Poisson and negative binomial models account for non-negative counts of discrete occurrences. The Poisson model requires that the mean and variance of the dependent variable be equal, which is rarely true. This leads to a consistent model but invalid standard errors. The negative binomial model handles counts with overdispersion. When underdispersion is present, a generalized Poisson regression model may be used. Generalized Poisson handles both overdispersion and underdispersion.
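As a rough sketch, Consul’s form of the generalized Poisson pmf (an assumption on my part about the exact parameterization) can be written directly; setting the dispersion parameter lam to zero recovers the ordinary Poisson, while lam > 0 gives overdispersion and lam < 0 underdispersion:

```python
import math

def gen_poisson_pmf(x, theta, lam):
    """Generalized Poisson pmf (Consul's parameterization, assumed here):
    P(X = x) = theta * (theta + x*lam)**(x - 1) * exp(-theta - x*lam) / x!
    lam = 0 reduces to the ordinary Poisson with mean theta."""
    return (theta * (theta + x * lam) ** (x - 1)
            * math.exp(-theta - x * lam) / math.factorial(x))

# Sanity check: with lam = 0 this matches the Poisson pmf exactly.
poisson_p = math.exp(-3) * 3 ** 2 / 2  # Poisson(3), P(X = 2)
print(abs(gen_poisson_pmf(2, 3.0, 0.0) - poisson_p) < 1e-12)
```

Note the mean works out to theta/(1-lam) and the variance to theta/(1-lam)^3, which is how the single extra parameter absorbs both over- and underdispersion.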

Ward, J.H., Jr. (1963). Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association, 58, 236-244.

Ward describes a clustering technique that allows for grouping with respect to many variables in such a way that minimizes the loss of information within each group. Traditional statistics would take a group of numbers, find the mean, and then calculate the error sum of squares of all cases about that one mean. By grouping, the ESS is minimized as cases are compared to their group means instead. The appropriate number of groups can be determined during the grouping process rather than needing to be specified in advance.
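Ward’s objective can be illustrated with a tiny numeric sketch (the data are invented): grouping reduces the error sum of squares relative to pooling everything around one mean.

```python
import numpy as np

def ess(values):
    """Error sum of squares of a group about its own mean."""
    v = np.asarray(values, dtype=float)
    return float(np.sum((v - v.mean()) ** 2))

scores = [2, 3, 4, 20, 21, 22]

# One group, one mean (12): large ESS.
one_group = ess(scores)

# Two natural groups: total ESS is the sum of the within-group ESS
# values, which Ward's method keeps as small as possible at each merge.
two_groups = ess(scores[:3]) + ess(scores[3:])

print(one_group, two_groups)  # 490.0 4.0
```

Ward’s algorithm runs this comparison in reverse: starting from n singleton groups, it repeatedly merges the pair whose union increases the total ESS least.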

Friday, November 21, 2014

Instructional Design - An Annotated Bibliography

Anderson, L.W. & Krathwohl, D.R. (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. New York: Longman.

Instructional objectives are the exposure to terminology and the scaffolded practice that build towards educational objectives. Educational objectives are the mid-level competencies, the level at which assessment should culminate. Global objectives are the connection to the workplace – what a professional in the field should be able to do, or at a minimum, the big-picture goal of a degree program. The process may also be described in the other direction, as a cognitive mapping exercise where global objectives are deconstructed into educational objectives, and educational objectives into instructional objectives, utilizing a template representing the six levels of the taxonomy: Remember, Understand, Apply, Analyze, Evaluate, and Create. Objectives are what a student should be able to do, not a specific instructional or assessment activity.

Dupin-Bryant, P.A. & DuCharme-Hansen, B.A. (2005). Assessing Student Needs in Web-Based Distance Education. International Journal of Instructional Technology & Distance Learning 2(1).

Student needs assessment helps the instructor plan to facilitate a course learning experience. Learning objectives may or may not already be in place when the needs assessment is carried out, but the needs assessment will help refine the instructional objectives that need to be included and determine where to start. Areas to assess student needs include: computer skills, learning styles, available resources, desired outcomes, and prior experience. Computer literacy may be taught to the entire class, or just to the group that needs it, or integrated into other learning activities. There is a larger debate on the usefulness of learning styles inventories, but the important concept is to ensure a variety of types of content and activities are provided. Available resources are probably more important in web-based education but matter in any environment – do students have the hardware, software, internet access, and other resources needed to participate fully in all class activities? Course objectives are one thing, but students may be looking to get something else out of the class. Always build on what the students have previously learned, whether from previous classes or from experience in the workplace.

Fisher, D. H. (2012). Warming up to MOOCs. The Chronicle of Higher Education. Retrieved March 19, 2013 from http://chronicle.com/blogs/profhacker/warming-up-to-moocs.

Hesitation to use materials available from other instructors in one’s own classroom may be due to insecurity around what others will think about using outsourced lectures and what to do with class time instead of lecturing. The author used the MOOC content as homework assignments, flipping the classroom to then allow for higher level discussions of the material, instead of just presentation of the material. By utilizing materials of other faculty and contributing back, the community of scholarship extends from the research component of the faculty role to include teaching, which is often ignored.

Fusch, D. (2012). Course materials for mobile devices: Key considerations. Higher Ed Impact. Retrieved March 19, 2012, from Academic Impressions http://www.academicimpressions.com/news/course-materials-mobile-devices-key-considerations.

People spend as much time reading on a digital screen as they do reading paper. The amount of content read on mobile devices will soon surpass what is read on full-size computers. Faculty need to consider the usability and accessibility of the learning resources they assign to ensure they can be used effectively on mobile devices. Design for mobile first: it is easier to move from mobile to desktop than the other way around. Record short videos, avoid PDFs, and break readings into smaller chunks. Copyright and licensing considerations are important, as different licensing may apply in the mobile realm. Using open content is one way to ensure it can be ported to other platforms.

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.

One school of thought is that students learn by discovering or constructing concepts themselves, while another holds that direct explanation of concepts and related metacognition is most effective. The assumption at play in the minimal guidance approach is that experiential, self-constructed learning is best. The authors put forward that an understanding of human cognitive architecture is needed to determine the most effective instructional methods. They explain that long term memory permits experts to pull from vast experience to recognize a situation and know what to do. Given the limitations of working memory, the authors explain that minimally guided instruction taxes working memory while pushing little into long term memory. They quote Shulman’s distinctions among content knowledge, pedagogical content knowledge, and curricular knowledge (metacognition), as well as how one works in the field (epistemology). Studying worked examples reduces cognitive load for novice learners who are appropriately prepared for them. PBL research shows little or no gain in problem-solving ability.

Reed, S. K. (2006). Cognitive architectures for multimedia learning. Educational Psychologist, 41(2), 87-98.

Six theories of multimedia learning are reviewed, the first three being memory studies and the last three in instructional contexts. Paivio’s Dual Coding Theory: visual imagery is an important method of coding concepts; dual coding refers to the use of both verbal coding and visual coding of semantic meaning. Baddeley’s Working Memory Model: verbal and visual coding, with a verbal focus on phonological learning; includes the need for a “central executive” that the learner uses to guide what modality to use in the moment; author later adds to the model the episodic buffer, where information from various modalities can be combined. Engelkamp’s Multimodal Theory: acting out what is being learned results in greater recall, as action implies understanding, assuming the action is relevant to the semantics. Sweller’s Cognitive Load Theory: Multimedia design may decrease extraneous cognitive load by integrating information that needs to be presented together, worked examples, and schemas; split-attention and redundancy effects are described. Mayer’s Multimedia Theory: utilizes recommendations from other models; seven principles – multimedia (learn better from pictures and words together), spatial contiguity (corresponding words and pictures should be close to each other), temporal contiguity (words and pictures should be presented simultaneously), coherence (extraneous words, pictures, and sounds should be excluded), modality (animation + narration > animation + text), redundancy (animation + narration > animation + narration + text), and individual differences (low-knowledge learners and high-spatial learners are more affected by multimedia presentation). Nathan’s ANIMATE Theory: visual representation through simulation to help model the solution to a problem.

Renninger, K. A. (2009). Interest and identity development in instruction: An inductive model. Educational Psychologist, 44(2), 105-118.

Instructional model that states both the interest and identity of a student are important to consider in developing learning activities. Interest relates to the desire of a learner to reengage with particular content after a previous experience. Identity is the learner’s self-representation as someone who engages with particular content. Interest needs to be cultivated and sustained throughout all stages of individual development. Interest requires some understanding of what it takes to engage, not just a baseless sense of euphoria around an interesting topic. It’s also important to consider interest in achievement, as opposed to interest in the actual content. Identity changes as learners mature and come to understand how much work is required to accomplish required education levels and goals.

Shute, V., Towle, B. (2003) Adaptive E-Learning. Educational Psychologist 38(2).

Early iterations of e-learning were concerned with simply getting information online, but the focus now is on improving learning and performance. To be most effective for each individual learner, the characteristics of each learner should be assessed. One behavior a system should encourage is exploration, as students who explore and participate in optional material tend to perform better on assessments. The learner model represents what the individual learner knows, and the instructional model presents and assesses content in the most appropriate way. Adaptive e-learning should provide not the same textbook on a scrolling page instead of a physical page, but rather dynamically ordered and filtered pages that present learners just what they need right when they need it.
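As a toy illustration of the learner-model/instructional-model split (every name and structure here is hypothetical, not from the article), the instructional model filters and orders content based on what the learner model says is already mastered:

```python
# Hypothetical content pool with a difficulty ordering.
lessons = [
    {"topic": "variables", "difficulty": 1},
    {"topic": "loops", "difficulty": 2},
    {"topic": "recursion", "difficulty": 3},
]

# Learner model: what this individual learner already knows.
learner_model = {"variables": True, "loops": False, "recursion": False}

def next_lessons(lessons, mastery):
    """Instructional model: skip mastered topics, order the rest easiest-first."""
    todo = [lesson for lesson in lessons if not mastery[lesson["topic"]]]
    return sorted(todo, key=lambda lesson: lesson["difficulty"])

plan = next_lessons(lessons, learner_model)
print([lesson["topic"] for lesson in plan])  # ['loops', 'recursion']
```

A real adaptive system would also update the learner model from assessment results, closing the loop the article describes between presentation and assessment.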

Smith, L. (2003). Software Design. In Guidelines for Successful Acquisition and Management of Software Intensive Systems (4th ed.). Hill Air Force Base, UT: U.S. Air Force Software Technology Support Center.

Given the complex nature of programming, design is the key phase between gathering requirements and actual development. Design includes several iterations, including functional design (logic, desired outputs, rules, data organization, and user interface), system design (system specifications, software structure, security, and programming standards), and program design (software units, test plans, user documentation, install plan, training plan, and programmer manual). Design methods include structured design (functions and subroutines with an order), object oriented design (objects inherit from parents, changes can be pushed out to many related objects, and specifics about what happen in each object are not as important), and extreme programming (frequent code review, testing, and release iterations).

Stiggins, R. & DuFour, R. (2009). Maximizing the Power of Formative Assessments. Phi Delta Kappan 90(9). Retrieved January 31, 2013 from http://alaskacc.org/sites/alaskacc.org/files/Stiggins%20article.pdf.

Formative assessment helps to track individual student achievement and performance and drive continuous improvement. Common assessments are created by multiple faculty members teaching the same course. Common formative assessments can result in significant improvement of learning if they are specifically integrated into instructional decision making, high quality, and used to benefit student learning. Assessments may be at the classroom, school, or institutional level. No matter the level or how they are used, assessments need clear learning targets that are appropriately scaffolded and achievable, based on established standards, high quality and high fidelity, and are available in a timely and understandable form in order to help the learner do better next time. Common formative assessments can be used to determine how an individual student is doing as well as to compare classroom performance. The act of putting together a common assessment allows the conversation to happen regarding what is truly important to measure. The greater dialogue among faculty members results in a higher quality assessment than what any individual teacher might be able to create.

van Merriënboer, J. J. G., & Sweller, J. (2005). Cognitive load theory and complex learning: Recent developments and future directions. Educational Psychology Review, 17(2), 147-177.

When introduced, cognitive load theory led to new types of instructional methods, such as providing many worked examples instead of problems to solve. The theory posited that long term memory is made up of schemas, which make meaning out of complex structures. Working memory is limited when dealing with new content but not limited when working with schemas from long term memory. Cognitive load theory deals with the processing of content in working memory to create schemas stored in long term memory. In order to take a dynamic approach, where instruction is automatically tailored to the learner, the knowledge of learners must be assessed and methods of promoting effective instruction for each group of learners are needed. Assessment should include ability to generate correct responses as well as the mental effort required to accomplish that.

White, B., Frederiksen, J. (2005). A theoretical framework and approach for fostering metacognitive development. Educational Psychologist, 40(4).

Metacognition is crucial in helping individuals learn through inquiry and work together in teams. Understanding how to use inquiry learning will help the learner be more effective in using inquiry learning strategies. Inquiry then includes inquiry about inquiry and inquiry about the domain of study. Advisors to help manage metacognition may be automated tutors or other learners.

Wiggins, G. & McTighe, J. (2006). Backward Design. In Understanding by Design (Expanded 2nd Edition). Upper Saddle River, NJ: Pearson.

When planning curriculum, it is important to pay attention to local and national standards but also to determine the specific understandings that are desired. The focus should be on learning rather than on teaching. By determining a larger purpose first, the best resources and activities can be selected to achieve the goal. Traditional design falls into the trap of providing interesting experiences that do not lead to specific desired achievements, or of briefly covering all content without touching on any of it with enough depth to be meaningful. Start with desired goals and ask what appropriate evidence of achievement would look like, and likewise what the assessment should consist of. Only after determining the desired results and an appropriate assessment can learning experiences be planned. Some sacred cows may be harmed in the process of ensuring all activities have a specific purpose, with effectiveness in reaching our targets in mind. The textbook may lose the central role it has played in many classrooms.

Friday, October 31, 2014


Let's take a look at this importance-urgency matrix from Dr. Stephen Covey. He talks about the need to prioritize your activities in order to manage your time effectively. You can actually keep a log of your activities throughout the day and categorize them in terms of how urgent and important they are, and you may be surprised at where you spend most of your time. People often claim they don't have time to do certain important items like planning and building important relationships, because they are always just putting out fires. It's the important/urgent items that demand our attention immediately. In between all the fires, we have all sorts of other small activities that fill in the rest of our time, but these are often unimportant items that are either forced on us by others or personal preferences and obsessions.

The trick is to prioritize properly. By focusing attention on the important but not urgent items, such as strategic planning and building key relationships in accordance with your strategy, the fires will actually put themselves out. If you have a good relationship with a customer, they'll understand when one order doesn't come through right, so while you need to fix it, it's not really as much of a fire as if you had to be worried about losing the account altogether. On the other hand, you might have a customer that doesn't fit your target demographic, who causes problems, and who you don't make much money on anyway. If you can make the strategic decision to drop that customer on whose fires you're wasting a lot of time and energy, you may come out ahead, because you can focus that attention on opportunities that will provide a better return on investment. In order to have the time to focus on strategic activities, you have to eliminate the unimportant activities that don't serve a greater purpose. Eliminate or shorten some meetings; set a schedule to check email once every few hours instead of letting it distract you as it comes in; stop creating reports that you think others need but they don't actually even look at. For an IT department, focusing on strategic aspects of the system infrastructure will help ensure projects are rolled out in a way that makes sense to support the company and possibly even utilize technology to drive new business opportunities.

The SWOT Analysis is an example of a Quadrant II (important but not urgent) activity that helps you understand where you should be focusing your time in order to be the most effective. A SWOT Analysis doesn't need to be overly structured or complicated. Spending a lot of time building a pretty SWOT template and training everyone on its use would be a good example of an unimportant activity. Put it out there and let it happen, whether you use a 2x2 matrix, bulleted lists, or more of a free-form mind map. The strengths and weaknesses are inward facing: they refer to inherent qualities of the company or department and what it is currently doing. Opportunities and threats are outward facing: they are qualities of the environment, actions of competitors, or imminent events that will have an effect on you. The goal is to build on strengths and take advantage of opportunities, while eliminating weaknesses and preventing threats from knocking you down.

In order to get a handle on where to focus attention, after brainstorming, it's important to group and rank the items you have listed. Provide additional details to determine the size of the threat or the amount of money an opportunity may be worth to you. Often there are connections between the internal and external analysis. You can leverage your strengths to take advantage of opportunities and avoid threats. Overcoming a weakness may open up new opportunities. So draw those connections and quantify each aspect of your analysis, but keep your analysis simple and visual. Keeping it all on one page will allow you and others to see how all the parts tie together. Provide additional information as a separate write-up and attach it on following pages. Of course, as you begin making decisions on what to focus on, you will come up with a more detailed plan, which is great, but the initial analysis should remain simple and understandable by anyone who picks it up.

People usually like showing off their good side, so it is easy to list the strengths, however realistic those strengths actually are. It's more difficult for managers to get honest answers from their employees about real weaknesses and threats, so it is important to create a safe space when brainstorming the more negative aspects. Here's where having done your relationship building with your team will allow there to be enough trust to do this legitimately. You might have to use a technology solution that lets team members submit weaknesses and threats anonymously if you don't have the trust to do so face to face. Being aware of and honest about your challenges can show as much strength as, or more than, listing out what your strengths are.

Tuesday, September 30, 2014

Introvert's Dream

A colleague of mine was invited to a little virtual coffee break get-together for others who had been hired around the same time as him. There were several conversation starters sent out beforehand. I wasn't invited, so I don't know how much they stuck to the script or talked about other things.

But one of the questions spoke to me:
You are stuck on a desert island and you can only bring one song, one movie and one type of food. What would you bring?
Keep your song and movie if I can have pizza from Sacco's and a promise that you're not pulling my leg about the desert island thing.

If I really do get a song, too, it would be one of Clapton's hour-long jams.

Monday, August 4, 2014


As IT is integrated into more and more aspects of our lives at work, home, and everywhere in between, the need to make all the varying systems around us work together seamlessly leads to increased complexity. But more complexity means more cost and more likelihood of downtime. The article linked below discusses the importance of keeping it simple and provides some basic principles for making your organization more flexible while keeping things simple at the same time. The points in their simplification roadmap are to start at the top, use an entrepreneurial approach, use cloud services when available, and be agile. By having buy-in at all levels and focusing on adaptability, you can focus on the unique value you add rather than wasting time reinventing the wheel or maintaining the status quo.


Thursday, July 17, 2014

Technology Rights

A recent court case in Europe has highlighted a right that many would not immediately list among the rights most important to them - the right to be forgotten. Privacy expectations in Europe are different from those in the United States, as Google found out when it took pictures all over Germany for its popular Street View service. But what about the right to have links removed that refer to old newspaper articles about something that happened a decade or two ago? It happened. There was a newspaper article about it. It's public information. Things change over time, and it's old news, but should the original articles still be searchable? Technology is an enabler. It helps you do what you want bigger and faster than you could without it. But that doesn't mean you can always control it. The man suing for removal of a past legal issue now shows up in more search results than he did before, magnifying the discussion around him. So how do you effectively leverage technology to magnify the positive and manage the negative without it getting out of control on you? That's the tough question to ask in your organization.

More on the Right to be Forgotten:



Tuesday, July 8, 2014

Managing the Critical Path

When planning a project, the temptation is always there to build in extra time everywhere so that your schedule never slips. Just as when you're putting in tile or carpet, you order 10% more than you measure you need, in case something gets damaged or you mis-measured. Time is the biggest resource you have on a project, and the most visible "failure" you can have is missing your launch date. So it makes sense that you would add 10% or some other fudge factor to all your estimates, right? Not so fast. If you have a time-sensitive launch, set the completion date well enough before you really need it, but don't just give everyone extra time to get everything done.

The critical path is the sequence of tasks that must be completed on time for the project to complete on time. If you have slack built in between tasks early in your project, then what you've done is made it so those tasks can be delayed without changing the completion date. By definition, if tasks can be delayed without affecting the project completion date, they are not critical. If you do something like this, you'll end up with a very short critical path, with just the last task or two showing in red, meaning only the last couple have to be completed on time. That makes sense if you look at it logically. It may be logical and possible, but is it allowed? I'm not sure I can answer that question, or that I'd want to even if I could. The better question than whether it's allowed is whether it's a good idea. And that, I can say emphatically, it is not.

Anyone looking at your Gantt chart or network diagram will expect a critical path. There are many ways you can show that, and there are many possible ways to put together a project. You can have a completely sequential project, in which case only one task is being worked on at a time and everything is on the critical path. It's neither logical nor desirable, except in the rarest of circumstances, to have every task on the critical path. At the opposite extreme, it is neither logical nor desirable to have only a couple of tasks, or even no tasks, on the critical path. At its most basic level, the critical path is really just a calculation. It is what it is. You simply measure the lengths of the various paths through the project, and the longest one is critical. At a more strategic level, the critical path is key to your management of the project, as it is the series of tasks you will be watching most closely for scheduling issues. If everything is critical, or if nothing is, then you have nowhere to focus your attention, and the project just kind of does whatever it wants. You can probably see how that might be a bad thing.
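That "just a calculation" is simple enough to sketch in a few lines: topologically walk the task graph and keep the longest cumulative duration into each task. The tasks, durations, and dependencies below are made up purely for illustration:

```python
# Critical path as a longest-path calculation over a task DAG.
from graphlib import TopologicalSorter

durations = {"design": 5, "build": 10, "docs": 3, "test": 4, "launch": 1}
predecessors = {               # task -> tasks that must finish first
    "build": {"design"},
    "docs": {"design"},
    "test": {"build"},
    "launch": {"test", "docs"},
}

earliest_finish = {}   # longest path (in days) ending at each task
best_pred = {}         # predecessor on that longest path

for task in TopologicalSorter(predecessors).static_order():
    preds = predecessors.get(task, set())
    start = max((earliest_finish[p] for p in preds), default=0)
    earliest_finish[task] = start + durations[task]
    if preds:
        best_pred[task] = max(preds, key=lambda p: earliest_finish[p])

# Walk back from the task that finishes last to recover the critical path.
task = max(earliest_finish, key=earliest_finish.get)
path = [task]
while task in best_pred:
    task = best_pred[task]
    path.append(task)
print("critical path:", " -> ".join(reversed(path)))  # design -> build -> test -> launch
print("project length:", max(earliest_finish.values()), "days")  # 20 days
```

Notice that "docs" finishes on day 8 but "test" doesn't finish until day 19, so the docs branch has 11 days of slack and stays off the critical path. Pad every task and those slack numbers balloon, which is exactly the degenerate "nothing is critical" chart described above.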

Building in slack between tasks, aka giving people extra time to finish their tasks, is not meaningful or helpful, and if anything is damaging, because if you give them extra time, they will take it. If you give someone a 1-week task but 2 weeks to do it in, they will wait until the second week to start. The idea is probably that if they end up taking 6 days instead of 5, the schedule doesn't change. That's good in theory, but if they have a 1-week period to do their work, start in week 1, and go one day over, they are just one day late. If they have two weeks, start in week 2, and go one day over, they are now 6 days late. Even if they have 2 weeks, start at the beginning, and finish in 5 days, there is a phantom 5 days that everyone else is going to be sitting around waiting. Why tell the next team they can't start work for 5 days when the previous work is done, just to maintain the schedule? If there are things that have to happen on a certain date, you hard-code those and work around them. But those are pretty rare.

If you want to build in some slack, put it at the end. If management wants things done by the end of the year, you plan the project to complete, say, October 31, and October 31 is the due date you publish. You don't tell everyone that the goal is Halloween but you don't care if it's not done until Christmas. Stick to Halloween. If it does go over by a week, we'll all survive. But the second you start telling people your "real" go-live date, that's the date everyone will be aiming for, and before you know it, New Year's comes and goes and everyone is still trying to wrap up loose ends that should have been done 2 months prior.

Monday, June 30, 2014

Self Plagiarism

Citations can be a messy thing. They're actually simpler than most people think, but everyone likes to make them messy. Citing your own work, of course, does add a layer of complexity.

At its simplest, using someone else's idea means you need to cite them. There are two reasons for this. One is that you should give credit when an idea isn't yours. It's only right. Even if you put it in different words, it's still their idea. Second is that citing lends credence to what you're saying. That is the part most people don't realize. Often we're taught that we need to be creative and think of things completely on our own, but since when are you or I the world's expert on any given topic? Better to apply what the experts are saying than to just make something up yourself. Using someone else's idea isn't weak; it actually makes your argument better.

That said, related to self-citation, there are two principles at play. One is copyright and the idea of giving someone credit for an idea you're using. Obviously if you write something, you own the copyright to what you create, and you can copy what you wrote verbatim, or put it in different words, as much as you like, since it's your copyright. Beyond copyright, however, the idea of self-plagiarism comes into play, a particular ethical issue of higher education that is not really applicable elsewhere. Generally speaking, professors don't like you using something you wrote for another class in their class without permission. Sometimes this varies based on the professor, and other times it is an institutional policy. Where I currently teach, there is no policy against this, because under the competency-based model, it's not likely that something a student writes for one class will work in another class without major revisions. If you have published something, it's yours, so do what you want with it. If you think citing yourself will lend additional credence to what you say since it's been published, then use it to make your presentation stronger.

Many schools use a service like TurnItIn, however, to check if something a student submits was submitted to another class or found on the Internet somewhere. So it all comes down to execution, where the rubber hits the road. If you can copy something you wrote elsewhere but the computer dings you for it, you'll have to deal with it and explain what you did, even if it was perfectly okay to do so. If you make it clear up front what is going on and cite everything, then it doesn't look as much like you're trying to hide something.

Wednesday, May 7, 2014

War on General-Purpose Computing

We hear a lot about security. We hear much about copyright. Not often do we think or hear about the connections between the two. Copyright- and internet-reform activist and science fiction author Cory Doctorow discusses just how these two come together in what he calls the War on General-Purpose Computing. The idea is that general purpose computers, such as your laptop or the servers locked away in the company data center, are designed to do exactly what we tell them. Because they can do anything, it's important that their owners/users know what is running on them. Rogue processes need to be found and removed to keep legitimate programs and data secure.

Being able to control everything on the computer means if it's displaying copyrighted content, you can (technologically, if not legally) make and distribute copies of that content. Content publishers claim this causes them to lose money, so they push for laws and technology that don't allow users to control everything on their previously general-purpose computer. Since owners/users can't even tell everything that is running, let alone actually control everything their computer is doing, security gives way as someone else is controlling their computer. Someone else is controlling your computer.

Friday, April 25, 2014

Make It Easy

In a New York Times technology advice column, a grandparent asks how to get videos of the grandchildren, recorded in portrait mode, to display properly, since the media player they use plays them back in landscape mode. Various software options are discussed for accomplishing the task, but there is a glaring hole in this advice. The video shouldn't be recorded in portrait mode to begin with. Video is always more natural in landscape mode, so they should ask their son to rotate his phone when recording, but again there is a glaring hole in this advice. The fact is that it is more natural to hold phones vertically. So if it's easier to hold phones up and down but video is more natural to view in widescreen, what's the solution? The solution is for hardware and software vendors to design their cameras to record in landscape mode even when the phone is held vertically. It would be very simple to do and would prevent many of the poor-quality videos that get recorded. You may not work somewhere that makes hardware or software for smartphones. But wherever you do work, there's probably a similar issue you could solve just by paying attention to the user experience and making it easier for people to use their technology well. If there's something you want people to do, the solution is simple: make it easy.


Monday, March 31, 2014

The Statistics of a Degree

This video posits that the school system somehow robs students and that they will be better off if they don't get a degree. Instead they should educate themselves on the street or in their garage. The performer (yes, he's performing to get a YouTube paycheck by millions of us watching his video and associated advertisements) asks the watcher to look at the statistics, and then proceeds to list off a dozen predictable outliers who were wildly successful without graduating from college.

Let's actually look at the statistics, shall we?

Maybe you're special and will be the next outlier. Maybe our schools could do things more efficiently (okay, not maybe; they do need an overhaul). Maybe you'll be more likely to have a higher paying job if you get a degree.

Monday, February 17, 2014

Help Seeking - An Annotated Bibliography

Aleven, V., Stahl, E., Schworm, S., Fischer, F., & Wallace, R. (2003). Help seeking and help design in interactive learning environments. Review of Educational Research 73(3).

Help seeking can be seen not as dependence of the learner, but as self-regulated behavior that helps to develop independent ability. For this to be the case, the help seeking needs to be effective. There are various types of computer-based instruction, including intelligent tutoring systems (AI gives context-sensitive hints), computer-assisted instruction (feedback on actions without AI to guide), educational hypermedia (cross-linked information), and problem-based systems (authentic problems with background information and hints about solving the problem). A help-seeking model is presented: become aware of the need for help, decide to seek help, identify helpers, ask for help, and evaluate the help. Many studies actually show ineffective use of computer-based learning, but on-demand help does tend to help students learn better. Student prior knowledge is a major influence on student performance and success, both in terms of familiarity with the subject and with the learning environment. Help-seeking ability improves with age due to a better ability to monitor one's own performance. In terms of gender, males are less likely to seek help than females in traditional classroom environments, and while there is less research in computer-based learning environments, similar results have been found. A focus on performance rather than learning can lead to avoidance of help seeking.

Anderson, T. (2003). Getting the mix right again: An updated and theoretical rationale for interaction. International Review of Research in Open and Distance Learning 4(2).

Interaction may be defined as only between two people, but here they accept the definition that allows any people or objects to interact with and influence each other, so a student may interact with another student or may interact with content. It’s difficult to know for sure if interactions, as helpful as they may be, actually have educational value. Some students choose programs that minimize the amount of person to person interaction required. A high level of interaction with content, other students, or the teacher may be sufficient, even if the other forms are not present (although student/teacher interaction is perceived as the highest value). Student/content interactions can take the place of many person to person interactions in the right circumstances.

Azevedo, R., Moos, D.C., Greene, J.A., Winters, F.I., & Cromley, J.G. (2008). Why is externally-facilitated regulated learning more effective than self-regulated learning with hypermedia? Educational Technology Research and Development 56(1).

This study compared self-regulated and externally facilitated learning with adolescents studying complex topics. If students lack metacognitive abilities, such as planning, setting goals, and activating prior knowledge, ineffective strategies may lead to less effective use of online resources. The tutor provides individualized scaffolding to each student, which fades (although not completely) during the course. The tutor-led scaffolding condition helped students obtain a more complex mental model as well as more declarative knowledge, and the two groups used different metacognitive strategies. The study is fairly limited by the students' age and low prior knowledge and by the relatively complex nature of the content.

Butler, R. (1998). Determinants of help seeking: Relations between perceived reasons for classroom help-avoidance and help-seeking behaviors in an experimental context. Journal of Educational Psychology 90(4).

This study dealt with help-avoidance. Students seek less help on assessments identified as testing competence than on assessments presented as an opportunity to learn. Their reluctance is often due to perceptions that learning should be autonomous and that asking for help is evidence of incompetence. This can lead students who do need help to seek covert help (cheating). Some students may ask for help in solving a problem because they simply want to finish, not necessarily to learn anything. Students with an ability-focused orientation asked fewer questions than those with autonomous or expedient orientations. Boys with an ability-focused orientation cheated more often. One observation that wasn't a specific purpose of the study was that teachers participating in the study created an environment more conducive to asking questions than is normally found in classrooms. The study is also limited by the young age of the students.

Elen, J., Clarebout, G., Leonard, R., & Lowyck, J. (2007). Student-centred and teacher-centred learning environments: What students think. Teaching in Higher Education 12(1).

The balanced view includes sharing instructional tasks between teacher and student at different points in time. The transactional view is similar, in that student and teacher share responsibilities, but the teacher has the additional responsibility of monitoring and coaching the student through their part. The independent view claims that the roles are fundamentally different. The survey tended to confirm that student-centeredness and teacher-centeredness are not necessarily at the extreme ends of a continuum, so giving more power to students doesn't necessarily mean the teacher's job goes away completely. They can actually be mutually reinforcing.

Karabenick, S.A. (2011). Classroom and technology-supported help seeking: The need for converging research paradigms. Learning and Instruction 21(2).

Help seeking is more likely to occur in a context focused on learning and understanding than ones focused on ability or where public disclosure may be embarrassing. Differences exist between research in computer-mediated environments and traditional classrooms. When presenting new information, one study showed preferences for a more structured environment, but given the new methods for the teacher and new content for the students, the study’s results may not be applicable elsewhere. Motivational content may lead to additional help-seeking behaviors. Help-seeking is susceptible to social influence, even when not interacting with another person directly.

Lebak, K. & Tinsley, R. (2010). Can inquiry and reflection be contagious? Science teachers, students, and action research. Journal of Science Teacher Education, 21, 953-970.

Case studies of three science teachers who converted from a teacher-centered approach to an inquiry-based approach for student learning. Whether the teacher was unaware of a need for change, working with special needs students, or limited in the amount of time to conduct experiments, they all found students were more engaged by being hands-on. Peer reflection and feedback (of the teacher’s peers) was important in helping the teachers transform their classrooms. Both the students and teachers evolve dramatically during the conversion to inquiry learning.

Makitalo-Siegl, K. & Fischer, F. (2011). Stretching the limits in help-seeking research: Theoretical, methodological, and technological advances. Learning and Instruction 21(2).

As help seeking is a social behavior, some less socially oriented learners may avoid seeking help, so computer-based resources may reduce barriers that prevent face to face interactions. It is important to look at help seeking behaviors in a variety of environments, tied to various forms of instruction, with various types of resources available. In addition to studying the technology involved, it’s important to look at motivational and emotional dimensions.

Mercier, J. & Frederiksen, C. (2008). The structure of the help-seeking process in collaboratively using a computer coach in problem-based learning. Computers & Education 51(1).

There is little research on help-seeking in an online environment; most has been done in a social context like a classroom. Problems students may encounter in learning a new domain include not understanding the solution schema, not understanding the content, and making a mistake in the process. When problems occur, help can overcome the impasse. With a computer tutor, instead of having an expert monitor the student's progress and needs, the student has to monitor his or her own progress and needs. Phases in the Mercier model: recognize the impasse, diagnose the impasse, establish a specific need for help, find help, read and comprehend the help, and evaluate the help.

Peterson, S., & Palmer, L. (2011). Technology Confidence, Competence and Problem Solving Strategies: Differences within Online and Face-to-Face Formats. Journal of Distance Education, 25(2).

When students encounter a new problem, they often hesitate to participate because of a lack of confidence; however, research shows that that is the point where they need to engage with others, in order to solve a problem and move to more challenging tasks. Four problem solving strategies include: seeking instructor assistance, seeking peer assistance, further reading, and trial and error, all of which can be effective methods. One study showed that online students felt more comfortable asking for help than traditional students. In this study of university teacher education students, face to face students often waited for instructor assistance, while online students tended to do more trial and error or further reading, and the online students were more competent.

Roll, I., Aleven, V., McLaren, B.M., & Koedinger, K.R. (2011). Improving students’ help-seeking skills using metacognitive feedback in an intelligent tutoring system. Learning and Instruction 21(2).

A tutoring system must be able to detect metacognitive errors and encourage appropriate behavior. Help-seeking advice from the tutoring system can improve such behaviors within other domains of study.

Ryan, A.M., Pintrich, P.R., & Midgley, C. (2001). Avoiding seeking help in the classroom: Who and why? Educational Psychology Review 13(2).

The help seeking process starts when students realize there is a problem and then decide to seek help. Students may choose not to seek help because they believe they should not, that no one is competent to help, that it may take too long, or that it highlights one's incompetence. Highly competent students are more likely to ask for help, because they don't think others will think poorly of them for it; low achievers are more concerned about what others think.

Weerasinghe, T., Ramberg, R., & Hewagamage, K. (2012). Inquiry-Based Learning With or Without Facilitator Interactions. Journal of Distance Education, 26(2).

Inquiry-based learning promotes higher engagement and construction of knowledge in complex content areas. Teachers have an important role in encouraging participation in a community, but other types of interactions can be effective as well. The inquiry process includes four major phases: triggering event, exploration, integration, and resolution (see Gagne's 9 Events). The study compared online course discussions with and without a teacher or TA present. They found that in both cases students were able to attain high levels of interaction, inquiry, and meaningful learning. If anything, when the facilitator was not present, students picked up the slack in terms of additional metacognitive activities.

Wood, D. (2009). Comments on learning with ICT: New perspectives on help seeking and information searching. Computers & Education 53(4).

Digital technologies have allowed for additional research into the area of help seeking, although little has been studied so far. It does seem clear that self-regulation is important, that students need to be encouraged to use the resources available to them, and that, paradoxically, students become more successful independent learners when they seek help as needed. While human facilitators are common and natural, it's possible that automated recommender systems and knowledge bases may be as effective as those technologies become more robust.

Wood, D., Bruner, J.S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry 17(2).

Tutoring, where one member of a group knows how to do something the others do not, is a common feature of learning, especially with young children. Scaffolding, controlled by an adult, allows the child to solve a problem initially beyond his or her reach. One key is that recognizing and understanding a solution must precede being able to come up with one's own solution. Younger children were as adept as older children at recognizing appropriate solutions, although less adept at creating their own. The tutor needs to understand both the task at hand and the characteristics of the tutee.

Friday, January 31, 2014

The State of the Education

This week we had the State of the Union and State of the State addresses. I didn't listen to or watch either. I guess I'm a bit disillusioned with our government right now. Maybe I don't want to listen to our president talk about creating jobs, when he has never had a real job himself. Maybe I don't want to listen to our governor talk about how he wants 2/3 of adults in our state to have college degrees, when he does not have a college degree himself.

I think jobs and education are important, but I think we're doing it all wrong. It shouldn't be surprising that these all came across my feed reader or FB at almost the exact same time a day or two ago, given the mix of RSS feeds I subscribe to, but I thought they all fit together nicely.

First is Roger Schank's post about the need for a different kind of university that trains people for jobs instead of training people to be professors.
Most universities have copied the “training of intellectuals and professors model of education” and have disregarded the idea that future employment might be of major concern to students. Professors can do this because they are forced by no one to teach job skills. They don’t really know much about job skills in any case. The major focus of a professor at any research university is research. Teaching is low on their priority list and teaching job skills is very far from any real concern. So, economics departments teach theories of economics and not how to run a business, and law schools teach the theory of law and not how to be a lawyer, and medical schools teach the science of the human body but not how to be a doctor. Psychology focusses on how to run an experiment, when students really want to know why they are screwed up or why they can’t get along. Mathematics departments teach stuff that no one will ever use, and education departments forget to teach people how to teach.

Still we hear that everyone must go to college. Why?
Right after that came this article about regulators in California threatening to fine and shut down schools that teach students how to code and practically guarantee them a job immediately upon completion of the program.
In the learn-to-code movement, online schools and in-person courses are springing up to meet a huge need for more developers across a wide range of industries. For a price, these schools offer training in digital skills, such as software development, data science, and user experience design.

Many of these boot camps have a strong social purpose: They specialize in bringing diversity to the tech sector and in helping underemployed or unemployed Californians find jobs. Hackbright, for instance, specializes in teaching women to code so they can compete for lucrative computer engineering jobs.

These bootcamps have not yet been approved by [the government] and are therefore being classified as unlicensed postsecondary educational institutions that must seek compliance or be forcibly shut down.
And finally this brilliant Ted Talk by Temple Grandin, where she explains how we need all kinds of minds, including both verbal and visual thinkers.
The thing is, the normal brain ignores the details. Well, if you're building a bridge, details are pretty important because it will fall down if you ignore the details. And one of my big concerns with a lot of policy things today is things are getting too abstract. People are getting away from doing hands-on stuff. I'm really concerned that a lot of the schools have taken out the hands-on classes, because art, and classes like that, those are the classes where I excelled.

What can visual thinkers do when they grow up? They can do graphic design, all kinds of stuff with computers, photography, industrial design. The pattern thinkers, they're the ones that are going to be your mathematicians, your software engineers, your computer programmers, all of those kinds of jobs. And then you've got the word minds. They make great journalists, and they also make really, really good stage actors.

The world needs different kinds of minds to work together.
...not more professors