I read Tom Friedman's book The World is Flat a couple of years ago. The only thing I remembered reading before, as I went through it this time, was the introduction on the golf course in Bangalore and a vague recollection that he talked about Wal-Mart a lot. I don't know if he changed it that much when he updated and expanded it from the original, or if so much information has been stuffed in my head since starting my PhD that everything else has been displaced. Perhaps it's just that my focus was different this time, so it all seemed new. My focus this time was on how the flat world affects education, rather than business.
One point that stuck out to me (perhaps since I had recently read Wiley's report to the Secretary of Education) was Friedman's tenth flattener, The Steroids. These are new technologies that amplify and turbocharge all the other flatteners, making collaboration possible in a "digital, mobile, virtual, and personal" way. Wiley extends and slightly changes that list to digital, mobile, connected, personal, open, and participative. The specific technologies that Friedman lists as steroids are increasing computing and storage, IM and file sharing, VoIP, videoconferencing, computer graphics, and wireless. Since the book was written, wikis and blogs have exploded in popularity and Google has popularized online office productivity software. If I'm writing in my blog or working on a Google Doc and I have to go somewhere, I just close my browser and walk away, and I can pick up where I left off on any other computer. If you don't want to keep all your information online, you can install OpenOffice and Firefox or even an entire operating system (Linux) on a USB drive and carry your OS, programs, and documents with you. You'll notice it's the open source software that can be carried around. A few years ago, I remember seeing that someone got Windows 98 running on a CD through a painstaking process. The latest versions of Windows would make something like that even more difficult to accomplish, because of DRM and because of the bloat that would make them too large, but it doesn't matter: the OS and office productivity software are becoming irrelevant, just as printed syllabi, textbooks, and DRM-protected, non-open content are becoming irrelevant. We are becoming used to being able to collaborate online from anywhere, and the classroom should be no different.
Something else that got me thinking, as I've been reading lately about open source, open content, copyright, and licensing mechanisms, was when Friedman talked about Japan and China working together. Even with the bitter feelings the Chinese still have towards the Japanese who occupied their country and used biological weapons to kill millions of Chinese, the Japanese are outsourcing to China. The economics override the hate. That made me wonder if at some point we'll see some collaboration among Richard Stallman, Larry Lessig, Steve Ballmer, Tom Giovanetti, and Marilyn Bergman. Stallman's and Lessig's licenses, GFDL and CC, don't currently work together even though they're on the same team. The software and recording industries seem pretty much united in their opposition to anything being open, although Ballmer does claim that he likes to see open source development happen using Microsoft products. If Lessig and Stallman can't present a united front, however, how will anyone be able to withstand the attack from the MPAA/RIAA/ASCAP/Orrin Hatch/Microsoft front?
Friedman points to the Apache project as a good example of how development could happen using an open foundation. IBM worked with Apache to ensure those using Apache would be legally protected and able to use Apache for free. Anyone can now use Apache as a base to build more free stuff, just the same as they can use it as a base to serve up commercial services. He gives the mash-up example of realtors combining Google Maps with Craigslist to produce an always-current map of houses and apartments for sale or rent in a certain city. The businesses that will survive the outsourcing of many common tasks, according to Friedman, are the ones that localize, defined by Joel Cawley of IBM as "[taking] all the global capabilities that are now out there and [tailoring] them to the needs of a local community." One of the important functions of the OER movement is in providing resources available to anyone that are compatible with technological and legal frameworks that allow localization.
Sunday, October 28, 2007
More from IPI
After my post from just a few minutes ago, I read with interest a blog posting by Tom Giovanetti, president of IPI, where he discusses Venezuela removing intellectual property restrictions from their constitution and the rejoicing that is sure to follow from the copyleft camp. He thus mischaracterizes the copyleft movement as desiring to do away with IP protections altogether. I attempted to leave a comment on his blog, but it either got lost in the great bit bucket in the sky or is waiting for moderator approval to become public. Either way, the following is my comment to President Giovanetti, as closely as I could reproduce it from memory and remnants in my clipboard:
Your headline and comments about the copyleft folks being excited about the removal of IP protections in the Venezuela constitution display a surprising lack of understanding of the copyleft movement. Licenses like GFDL and CreativeCommons work within the currently broken copyright system to allow people to more freely share materials with others, of their own free will.
You mischaracterize Larry Lessig as promoting less-than-democratic policies. You point to countries with political and economic problems that happen to also not respect IP laws in a straw man attack that unfortunately adds to overall misunderstanding of the complicated issues at stake. I might suggest (if you have not done so already) reading Larry's book Free Culture and then making a more accurate statement on his position regarding Intellectual Property.
You might defend yourself by pointing to Richard Stallman attending a meeting with Hugo Chavez, and I don't doubt he has said something you could construe as his support of the removal of IP protection, given his outspoken activist nature. However, he has stated that he believes authors should be able to charge for their works in order to make a living if they so desire, and that a copyright system could help them do so.
Billions Stolen?
A headline in the newspaper about the tens of billions of dollars that are lost to piracy pointed me to the Institute for Policy Innovation (IPI). The IPI is dedicated to "advocating lower taxes, fewer regulations, and a smaller, less-intrusive government." It appears that one of their methods of promoting a smaller government and lower taxes is tightening copyright laws, so I thought I would link to it here in the interest of providing another viewpoint on copyright.
They make it difficult to deep link to articles on their site, so you have to just go to their homepage and look for their articles, but the following is a synopsis from their site. While I was there, I also found an article about different types of fair use that I thought was interesting, and I've included that synopsis as well.
IPI Policy Report - # 189
The True Cost of Copyright Industry Piracy to the U.S. Economy
by Stephen E. Siwek on 10/03/2007
22 Pages
Synopsis:
Using a well-established U.S. government model and the latest copyright piracy figures, this study concludes that, each year, copyright piracy from motion pictures, sound recordings, business and entertainment software and video games costs the U.S. economy $58.0 billion in total output, costs American workers 373,375 jobs and $16.3 billion in earnings, and costs federal, state, and local governments $2.6 billion in tax revenue.
IPI Issue Brief
What's "Fair"? Why Those Concerned About Copyright Fair Use Need to Say What They Mean
by Lee Hollaar, Ph.D. on 04/11/2007
8 Pages
Synopsis:
While many people in the copyright debate talk about "fair use," they seldom say which uses are of concern. But without specifics, it is hard to provide balanced exceptions to copyright protection. Congress should codify "fair use of necessity" and many instances of "economic fair use" so that people will know what is allowed, while reserving fair use primarily for the "transformative" or "productive" uses that reflect the goal of copyright.
Thursday, October 25, 2007
Technology in the Classroom
More is expected of students in school as more information has become available. As technology has been added to the classroom, it has not always been integrated as it should be. Educational technology is sometimes discounted as using technology just because it is there, which is unfortunate. Technology should be integrated into the classroom environment where it makes sense, along with appropriate curriculum reforms and training.
I’ve been starting to study a little about Problem Based Learning (PBL), and many of the characteristics of learning mentioned by Roschelle et al. seem to fit with the methods used in PBL, such as learning through active engagement, learning through participation in groups, learning through frequent interaction and feedback, and learning through connections to real-world contexts. They discuss students gathering data on weather and pollution to send to scientists who actually analyze the data they receive. Mistakes in gathering the data are not penalized, but rather used as a learning experience to analyze why the measurement error might have occurred. More is probably learned through failure and the analysis of that failure than through always succeeding the first time. I see this with some of the parents of the Scouts I work with – although the majority of the parents couldn’t care less what we do or don’t do (which is unfortunate), there are always one or two who get very upset when everything does not work out perfectly in one of our activities or with an award the boy is working on. It’s important to make mistakes and work through them.
Some of my readings in PBL have stated that when using the PBL approach, students actually learn less content than students might learn in a traditional environment, but they understand the material they do learn better and retain that information much longer. Roschelle et al. state that “teachers who succeed in using technology often make substantial changes in their teaching style and in the curriculum they use. However, making such changes is difficult without appropriate support and commitment from school administration.” It is unlikely that an approach like PBL that might lead to decreased short-term standardized test scores could be easily justified and implemented, even though deeper learning actually occurs.
One of the most important justifications for integrating technology into a classroom is the ability to provide more immediate feedback. It is important to receive feedback quickly, if not immediately, on work that has been done. If the technology that has been implemented is customizable by the user, individuals with different learning styles will be able to take advantage of the features that will help them, and turn off the features that distract them. I try to do that in my teaching, providing multiple methods of learning and practicing the material, but it would be interesting to actually research which methods of those provided that people use to study and then how they perform.
Roschelle, J. M., Pea, R. D., Hoadley, C. M., Gordin, D. N., & Means, B. M. (2000). Changing how and what children learn in school with computer-based technologies. The Future of Children, 10(2), 76-101.
Sunday, October 21, 2007
Economics of OER
As I've been thinking and reading about the economic models of Open Educational Resources (OER), I can't help but think of some of the influential contributors and writers in the open source community like Bruce Perens and Eric Raymond. Perens discusses how most software is developed as infrastructure, not as a product to be sold. Many companies run the open source Apache web server and alongside that run Microsoft Office and Windows for their desktop needs. They could easily use a closed-source web server or pay to have their own developed, just like they could easily use OpenOffice and Ubuntu on their desktops. It doesn't really matter, since it all comes down to someone's personal preference. Much of the commercial software used is the same as any other company is using. No one can claim a strategic advantage over their competitors by choosing to use Microsoft Office, because anyone else can buy the same software and use it as well. It is non-differentiating. It may even make sense to collaborate with a competitor, making both of you more efficient. Raymond discusses how using open source software can create goodwill that attracts customers, increase the size of a market so you can grow (even if it allows your competitors to grow also), regain control over a market that you might be losing, etc. Since the Open Source community is several years ahead of the OER community, it is important to keep a collective eye on the choices made by our Open Source friends to provide some guidance about what might and might not work for open content.
Benkler discusses the marginal cost of information, which is effectively zero, with perhaps some nominal transmission costs. Although it costs nothing to pass along information to others, Benkler goes on to say that IP laws do make sense by providing an incentive for "market-based producers [to] engage in the useful activity of creating new information, knowledge, and culture." The tradeoff to be made in IP issues is to charge enough to make it worth it to be creative, but to still be accessible for reuse by others. If existing IP is inexpensive enough (or free) for others, it will be less expensive for the next author to build on that existing material. As prices of outputs rise, the prices of inputs rise for the next generation of material, so their output prices have to rise to match. Costs go up because either all new materials must be created from scratch (if that is even possible) or more money has to be paid out for every reused expression. Theoretically, if everyone reduced the price they charge, everyone's costs would decrease together, and with costs decreasing, the price they charge for their products could be reduced...rinse...and repeat.
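To make that "rinse and repeat" dynamic concrete, here's a toy sketch (my own illustration, not a model from Benkler) where each new work's price is the licensing cost of the work it builds on plus a fixed creation margin. Cutting everyone's margin lowers the input cost for the next generation, so prices ratchet down together:

```python
def output_price(input_cost, margin):
    """Price a creator must charge: cost of reused inputs plus their margin."""
    return input_cost + margin

def cascade(generations, first_input_cost, margin):
    """Follow the price of successive works, each built on the last one."""
    prices = []
    cost = first_input_cost
    for _ in range(generations):
        price = output_price(cost, margin)
        prices.append(price)
        cost = price  # the next author licenses this work as an input
    return prices

print(cascade(4, first_input_cost=10, margin=5))  # [15, 20, 25, 30]
print(cascade(4, first_input_cost=10, margin=1))  # [11, 12, 13, 14]
```

With the larger margin, prices climb 5 per generation; with the smaller one, the whole chain of prices drops, which is the "everyone's costs decrease together" effect.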
So how many of our OER are created to be sold, and how many are infrastructure? I'm not sure I could pinpoint a percentage. Thinking back to my bachelor's and my MBA, much of the material we learned was in textbooks (the same textbooks used at Harvard or other prestigious universities, it was pointed out to us). So if I have the same textbook as a student at Harvard, and I have the same ability to go to Google or Wikipedia and read or publish information or even collaborate directly with that Harvard student, what makes the Harvard degree so different from mine? The actual content being deposited into us is non-differentiating.
USU is currently going through the re-accreditation process like all schools do once every 10 years, just checking to make sure everything is in order, to facilitate the transfer of courses between schools and to ensure that the students here can continue receiving federal financial aid by working towards an accredited degree. Not only does the entire school work to keep accreditation, but individual degrees and departments can be accredited. The result of accreditation basically ends up being that degrees should be mostly interchangeable, because we're following the same curriculum. There are obviously other reasons why certain schools are more prestigious than others. After Boise State University's perfect season, capped off by a win over Oklahoma in a BCS bowl, a survey showed that the national recognition for their football team had a positive impact on the school's reputation for academics and research (although the two are probably not related). There will always be something else to differentiate schools on, but it does not appear to be the content taught in the classroom.
Many of the textbooks we use are written by academics who are already being paid to develop course material, so by my calculations they're double-dipping when they get paid to write a book. Writing research articles and book chapters already fits into the promotion and tenure process as well as their duty to teach their classes. Anyway, giving them the benefit of the doubt, if they were to contribute to a bank of learning objects or OER of some other form like Wikipedia, they might lose money from book sales. But how much of the money from book sales actually goes back to the author? Perhaps something like 10-15% of the new book market, so maybe $10 per book? One author goes so far as to say that the reason textbook prices are so high is the evil used book market (even comparing the sales of used books to pirated movies and music). Dr. Roediger talks about "wear and tear" on authors (apparently including himself) who are constantly releasing new editions of books in order to continue receiving their cut of new book sales "until laws are changed to prevent the organized sale of used books". He even mentions his temptation to alternate between two versions of a book to save himself the time of revising again and again every two or three years...right in the middle of his observations of other behavior he considers to be unethical, like sales of complimentary copies of books or the bundling/unbundling of workbooks and CDs. That man needs to be slapped, er, I mean introduced to the wonderful world of OER. Seriously, he needs to be sent a special invitation to next year's Open Education conference. Look at the obvious frustration and wasted time he is spending, all to make a few extra bucks, when he could be releasing his materials so others can remix, add more insight, and check for mistakes. The time he spends would be greatly reduced, and the quality of the result would be higher. Dr. Roediger is already being paid to research and develop teaching materials, so let him get back to being productive by developing new ideas and actually teaching, rather than being so concerned about all the evil bookstores taking advantage of both students and authors.
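For what it's worth, that "$10 per book" guess is easy to sanity-check. The 10-15% royalty range is from my reading; the list price here is purely my own assumption:

```python
def author_royalty(list_price, royalty_rate):
    """Author's cut of one new-book sale; used copies pay the author nothing."""
    return list_price * royalty_rate

price = 80.0  # assumed new-textbook list price (my assumption, not a quoted figure)
print(author_royalty(price, 0.10))  # low end of the royalty range
print(author_royalty(price, 0.15))  # high end
```

At an $80 list price the author's cut lands between $8 and $12 per new copy, which is consistent with the rough $10 figure, and it underscores that every used-copy resale pays the author exactly nothing.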
It seems that everything would run smoother and more efficiently without having to worry about tracking all the IP issues inherent in creating closed content.
If the content becomes free, where does that leave degrees that are based on mastery of that free content? That one is going to have to wait for another day, but I imagine it will come down to paying for the actual differentiating features of an institution.
Saturday, October 20, 2007
Great Men
At the North Logan Pumpkin Walk, the theme this year was "Those Were the Days." Here's a picture of one of the scenes, with some great men in history: Martin Luther King Jr., George Washington, Albert Einstein, and ... Bob Marley.
Wednesday, October 17, 2007
Shift Happens
Here are a couple of videos that I've watched lately. The first is designed to get you thinking about globalization and how we are preparing our kids for the changes to come. It's about 8 minutes long.
This second video is about 20 minutes long, but makes a very good point and is very entertaining. Sir Ken Robinson at TED in 2006 discusses the importance of cultivating creativity in our kids rather than educating it out of them.
It's long, but it's worth it.
Friday, October 12, 2007
Licensing
For this week's OpenEd class, I jumped on the readings early in the week, meaning to write early. When I looked at the questions, though, I got a little stuck, so I've had licensing terms floating around in my head for the last few days, as I've pondered: What is missing from CC, and how can we possibly make CC and GFDL content compatible with each other?
Stian and Greg both mention that a non-BY license would be nice. I would agree with that. That got me thinking, since Attribution is the most consistent and simple term across all the CC licenses. Obviously an SA-only license would be pretty straightforward to implement, but then I wonder: without making a declaration to put your work into the public domain or selling the copyright to CC like in their Founders' Copyright, could you simply implement a license that allowed anyone to use your content however they like (similar to BY, but without the actual BY)? Would it matter that it would have the same effect as declaring it to be in the public domain? Would it have the same effect?
The CC licenses could possibly benefit from a Notification clause, which could be combined with any of the existing terms, asking those who remix or make certain other uses of the content to notify the creator that their work has been mixed into something else. Something like that might be too difficult to understand and too costly to implement, though. CC works because it is simple, and I believe it is important to keep it that way.
Looking to the software world, the shareware model comes to mind. We've all probably seen websites with a little PayPal donation box, or downloaded software like WinZip that can be distributed for free but requires payment for continued use. Pollock discusses the Magnatune music label, which allows users to choose what price to pay for their album downloads. There's a restaurant in Salt Lake City, the One World Cafe (soon to open another location in New York City), where guests pay whatever they feel the meal was worth and can help wash dishes or serve food to work off their meal if they need to. I don't know how well that would work with open content. You'd run into issues with both NC and SA under a shareware license, but maybe it would be worth it to CC to provide a service that runs payments through their system for a nominal 5% cost-recovery fee.
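To make the arithmetic of that brainstorm concrete, here is a back-of-the-envelope sketch of a pay-what-you-want split under a 5% cost-recovery fee. The function name and the fee itself are my own hypotheticals, not any actual CC service:

```python
def split_payment(amount_paid, fee_rate=0.05):
    """Split a pay-what-you-want payment between a hypothetical
    payment-clearing service and the content creator.

    amount_paid -- what the downloader chose to pay (dollars)
    fee_rate    -- the hypothetical 5% cost-recovery fee
    """
    fee = round(amount_paid * fee_rate, 2)
    creator_share = round(amount_paid - fee, 2)
    return fee, creator_share

# An $8.00 album download under a 5% fee:
fee, creator = split_payment(8.00)  # fee 0.40, creator 7.60
```

At that rate the service keeps 40 cents of an eight-dollar download, which is roughly the "nominal" level I have in mind, far below what a record label or publisher would take.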
The shareware idea is just brainstorming, not something I've thought through yet. Our GNU friends have a page dedicated to the various software licenses that are available and how they relate to each other. Perhaps another of the software licenses will spur an idea for licensing content differently. When it comes to GNU, however, something about them just makes me slightly uncomfortable. Perhaps it was the 20 pages about why we shouldn't say Linux, but GNU/Linux, when referring to that particular operating system. I mean, I enjoy and appreciate open source software, but perhaps not so much that I really care about the difference between Free Software and Open Source Software (although I could very easily explain the difference between Free Software and freeware, if you needed another clue as to my location on the geek continuum). Their approach seems fundamentalist, with only one right way to license. Stallman and friends get pretty worked up about whether a given license is really open, and they get upset about CC's NC clause (not because NC is unclear or difficult to enforce, but because they see disallowing commercial use as unfair). The CC licenses give a creator more choice than the GFDL does, because of the range of available options. That additional choice adds complexity and incompatibilities, though.
I hope something can be worked out so these licenses become more compatible with each other, but it seems unlikely to happen quickly. In the meantime, it may just be a liberal application of fair use that allows a mixture among the various incompatible licenses, along with a sprinkle of a gentleman's agreement not to sue.
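To show why that incompatibility bites, here is a toy model (my own simplification, absolutely not a legal analysis) of mixing CC-licensed works: a remix must carry every restriction from every source, but a ShareAlike source demands that the remix come out under that exact license, which fails as soon as another source drags in an extra term like NC:

```python
# Toy model of CC license mixing. Each source work's license is a
# set of restriction terms, e.g. {"BY"}, {"BY", "NC"}, {"BY", "SA"}.

def remix_license(*sources):
    """Return the restriction terms a remix of the given sources
    must carry, or None if the sources are incompatible."""
    # The remix must honor the union of all sources' restrictions.
    combined = frozenset().union(*sources)
    for terms in sources:
        if "SA" in terms and terms != combined:
            # ShareAlike pins the remix to this source's exact
            # license, but the remix has picked up extra terms
            # from other sources, so no valid license exists.
            return None
    return combined

remix_license({"BY"}, {"BY", "NC"})        # a BY-NC remix is fine
remix_license({"BY", "SA"}, {"BY", "NC"})  # None: SA vs. NC, incompatible
```

The same shape of conflict is what keeps CC BY-SA and GFDL content apart: each copyleft license insists the derivative stay under itself.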
Thursday, October 11, 2007
Artifacts
Norman discusses artifacts as tools or artificial devices that enhance our lives, with particular emphasis on human cognitive performance. The part that stands out for me in the chapter referenced below is the discussion of how, in spite of the power and importance of artifacts in our lives, much current research focuses more on the unaided mind. Rarely if ever do we do anything with just the unaided mind.
In many classes where tests and quizzes make up a significant portion of the grade, students are required to perform without the use of any artifacts. Occasionally tests are open-book or allow a note card. I can personally think of times when I remember reading something and can picture exactly where it is on the page and the picture next to it, but I can't quite remember the concept well enough to answer the question. Given 20 seconds with my book, I could answer it correctly. There seems to be a disconnect between both research and teaching and the way we actually work in real life. David Wiley points out that disconnect to the Secretary of Education's panel on the future of higher ed. In university courses, students pay a lot of money to be stuck in a classroom, reading out-of-date printed materials and listening to the teacher give generic instruction to the whole class, without being allowed to collaborate with others. As soon as students leave class, however, they are used to quickly jumping online with a cell phone or laptop, finding current, open, free information on exactly what they need at that moment, and sharing the newfound information with friends via instant messaging or blog posts.
As part of my teaching, I cover some basic information literacy skills. The goal of information literacy is the ability to use tools to find and evaluate information. You don't have to memorize a long list of journals; you simply have to be able to use the journal databases effectively. That was evident in Norman's explanation of the scope of artifacts: from a personal point of view, using an artifact changes the task to be completed, but from a system point of view, the task is the same; it is just done better, faster, and so on. That's one of those things that makes perfect sense when you read it, but it takes someone actually stating the obvious before you think about it and draw the connections.
I was also interested in the discussion of the interface between people and artifacts. In designing software or websites, it is important for the system to be intuitive. I sometimes wonder why I can sit down with a new piece of software and just know what to do, where another person might struggle to figure it out. Is it my experience and training that gives me an advantage, or is it some innate difference in how our brains work? That difference among users is one important reason, I believe, that many effective computer-based tools offer multiple ways of performing the same task (keyboard shortcuts, buttons, right-click menus, etc.). The many Web 2.0 tools that have exploded in popularity are likely used by so many people because they are so intuitive, although the social pressure and support in using them cannot be discounted.
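That design principle, several affordances bound to one underlying action, is easy to sketch. Here is a minimal, framework-free command registry (names and triggers are my own illustration, not any particular toolkit's API) in which a keyboard shortcut, a toolbar button, and a menu entry all fire the identical command:

```python
class CommandRegistry:
    """Map many UI triggers (shortcut, button, menu item) onto one
    named command, so every input path performs the same action."""

    def __init__(self):
        self._commands = {}   # command name -> callable
        self._bindings = {}   # trigger -> command name

    def register(self, name, action):
        self._commands[name] = action

    def bind(self, trigger, name):
        self._bindings[trigger] = name

    def fire(self, trigger):
        # Resolve the trigger to its command and run it.
        return self._commands[self._bindings[trigger]]()

reg = CommandRegistry()
reg.register("save", lambda: "document saved")

# Three different affordances, one behavior:
reg.bind("Ctrl+S", "save")
reg.bind("toolbar:save-button", "save")
reg.bind("menu:File>Save", "save")

assert reg.fire("Ctrl+S") == reg.fire("menu:File>Save") == "document saved"
```

The point of the indirection is that the artifact meets each user where they are: the keyboard person, the mouse person, and the menu-browser all reach the same underlying task.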
Norman, D. (1991). Chapter 2: Cognitive artifacts. In J. M. Carroll (Ed.), Designing Interaction: Psychology at the Human-Computer Interface. Cambridge: Cambridge University Press.
Sunday, October 7, 2007
Copyright and the Public Domain
Copyright has obviously gotten out of hand. Even ignoring the complexities of differences among laws from various countries, which you'll have to deal with anytime you cross international lines, just within the US, copyright law has become convoluted and complex. It is no surprise that the lawyers are making a killing off helping all these authors comply with the law and protect themselves from each other, all the while increasing costs and reducing creativity.
Unfortunately, when I teach about copyright as part of a larger unit, the most I generally have time for is to briefly explain that copyright protects your creation as soon as it is put in tangible form, without any additional registration or notification required, and that fair use allows some exceptions to this rule. I go slightly (but not much) deeper than that, also briefly mentioning work-made-for-hire, open source vs. all-rights-reserved software licensing, citing your sources, plagiarism, and the DMCA. If I have time, I try to at least point out Creative Commons by showing a CC search on Flickr.
Other than government-produced works, works 80-plus years old, works where someone forgot to renew the copyright or inadvertently left off the copyright notice back when the law required those things, and works specifically released into the public domain, we are left with tons of material that can't be reused by anyone without a big hassle...or at least we were, until the GNU and CC folks came along.
Ideally, we would have much more content in the public domain, but the GNU and CC open licenses allow for more sharing of content while protecting the author's copyright claim. So would we be better off by converting these open licenses over to public domain? That is, if more works moved from all rights reserved status to the public domain, we should theoretically see an increase in creativity and a decrease in production costs, so wouldn't it follow that by changing these open licenses (which contain restrictions) to public domain (with no restrictions) we would see a similar change? I don't necessarily think we would be better off.
The open licenses provide some benefits that the public domain does not, given our current IP climate of extended-length, automatic copyright. Copyright exists, according to the Constitution, to encourage new works by granting the right to exclusive use of those new works to their creator. If the only two choices given an author were infinite full copyright or completely giving up all rights to a creation, I believe we would see less overall sharing. The reason the open licenses work is that they allow the creator to retain the rights they care about, while allowing others to use their material in certain ways. This compromise is the strength of the open licenses.
If copyright reverted to its original term of 14 years, or to a model requiring registration or notice to retain copyright, the open license issue would be moot. Many materials would quickly enter the public domain, and we wouldn't need alternate licenses for them. I believe that many people who are willing to voluntarily apply a "some rights reserved" license would be unwilling to give up all their rights. Even the simplest case, an attribution-only license, is important because it ensures the author is acknowledged; that is not required of public domain materials. An author is unlikely to review his or her own materials on a regular basis to decide which old ones should be released to the public domain; he or she will likely decide that at publication. The open licenses let the author set it and forget it.
Should copyright law be scaled back so works enter the public domain faster? Yes. Is that going to happen? No. Would we be better off by releasing our creations into the public domain instead of using an open license? That depends. If people will actually do it, then yes. If they reserve all rights, because they don't want to cede some, then no.
Tuesday, October 2, 2007
Cinematography
Last week with the Scouts, we filmed a short video entitled "Be an Example". I left it up to them to pick a topic, and once they picked one, I tried to give them a little advice without getting in their way too much, letting them control their own project. I ran the camera (since it was mine and I didn't want them breaking it, plus we leaders didn't want to be in the video). I also did the video editing afterwards, since there's no way I could have gotten them to sit down and edit even a one-minute video like this. It took a full hour to film two one-minute takes, giving us a two-minute video, including the outtakes. The idea of the video is to be an example to your friends and not drink alcohol: one guy declines the beer that others are drinking, and a friend then decides not to drink anymore either. In my editing I tried to stick closely to how they portrayed it, while providing a few extra hints in places where the sound got muffled. Given the short amount of time I had for editing, my novice skills with iMovie, and the 12- to 13-year-old actors/directors with ADHD, I think it turned out okay.
Of course, I could have picked a different topic for them, written the script myself and commanded that they read from it, or insisted that they follow the Cinematography Merit Badge requirements to the letter, but it wouldn't have turned out nearly as creative or fun or interesting. It also wouldn't have been theirs. Given the Storming stage we are in as a group, with no sign of leaving it anytime soon, conflict and, more importantly, creativity are high, so we take advantage of what we can.