Wednesday, October 30, 2019

Plasmodium falciparum - Causative Agent of Severe Malaria Research Paper

Even in today's medically advanced world, malaria remains one of the primary concerns of researchers and health practitioners in areas where the disease is endemic. Despite immense research and the availability of advanced healthcare facilities, malaria has a high mortality rate, causing about a million deaths each year and infecting some 300 million people around the world. The purpose of this research paper is to provide information on the structure, aetiology and other aspects of malaria caused by Plasmodium falciparum.

Mode of transmission

The female Anopheles mosquito serves as the vector and definitive host for Plasmodium falciparum. The lifecycle of plasmodia has two phases: a sexual cycle and an asexual cycle. The sexual phase occurs in the female mosquito, while the asexual phase is completed in humans. Because it produces sporozoites, the sexual cycle is known as sporogony; the asexual cycle is known as schizogony because it produces schizonts. Plasmodium sporozoites are introduced into the intermediate host, i.e. humans, through the saliva of an infected mosquito when it bites an individual. Within 30 minutes, the sporozoites enter the hepatocytes, where multiplication and differentiation convert the sporozoites into merozoites (Levinson et al 1999).

Physiology and lifecycle

The merozoites produced in the liver are released into the peripheral circulation. Once released, the merozoites enter the red blood cells to initiate the erythrocytic phase of the disease. In the erythrocytic phase, each merozoite transforms into a ring-shaped trophozoite. The trophozoite then develops into an amoeboid form, which grows into a schizont. Each schizont is filled with several merozoites.
The red blood cells burst and release the merozoites into the general circulation, where they infect other red blood cells in a similar manner. This release of merozoites into the blood causes the recurrent symptoms typical of malaria caused by Plasmodium falciparum. The development of male and female gametocytes in the human red blood cells initiates the sexual cycle of P. falciparum. When a female Anopheles mosquito takes a blood meal, the gametocytes are sucked up and give rise to a female macrogamete and eight male microgametes, which resemble sperm cells. The male and female gametes undergo fertilization to form a diploid zygote, which then differentiates into a motile ookinete. The ookinete bores through the gut wall and gives rise to many haploid sporozoites. The sporozoites leave the gut wall and enter the salivary glands of the Anopheles mosquito. Once the sporozoites reach the salivary glands, the sexual cycle is complete and the parasite is ready to cause malaria when the mosquito bites a human (Levinson et al 1999).

Diagnosis

Thick and thin Giemsa-stained smears are examined under the microscope to determine the presence of the parasite in the blood. The thick Giemsa smear is used to detect the parasite, while thin smears are used to identify the parasite species. A blood sample from an individual suffering from malaria shows the characteristic ring-shaped trophozoites residing within the erythrocytes.

Monday, October 28, 2019

Imelda Marcos Essay

Imelda R. Marcos (born Imelda Remedios Visitacion Romualdez on July 2, 1929) is a Filipino politician and widow of former Philippine President Ferdinand Marcos. Upon her husband's ascension to political power, she held various positions in the government until 1986. She is the first politician elected as a member of the Philippine legislature in three geographical locations (Manila, Leyte, Ilocos Norte). In 2010, she was elected to the House of Representatives to represent Ilocos Norte's second district. She is sometimes referred to as the Steel Butterfly or the Iron Butterfly. [1][2] She is often remembered as a symbol of the extravagance of her husband's political reign, including her collection of 2,700 pairs of shoes. [3]

Ancestry

Marcos was born in Manila, Philippines. Her paternal ancestors were wealthy, landed and prominent, and claimed to have founded the town of Tolosa, Leyte. The Lopezes were descended from the Spanish friar and silversmith Don Francisco Lopez, originally from Granada, in the Andalusian region of Spain. Together with Fray Salustiano Buz, he arrived by way of Acapulco to build Roman Catholic missions in the island provinces of Samar and Leyte (Buz would establish his home base in Palapag, Samar, the exit-entry point of the Manila Galleons in the Visayas islands). [4]

Early life and career

Her branch of the family was not political. Her father, Vicente Orestes Romualdez, a law professor at Saint Paul's College and the administrator of the Romualdez Law Offices founded by his brother (Imelda's uncle), Philippine Supreme Court Justice Norberto Lopez Romualdez, was a scholarly man more interested in music and culture than public life. He was a traditionalist, preferring to teach in Spanish while the rest of the students and faculty spoke English and Tagalog. Marcos had a younger brother, Benjamin Romualdez (1930-2012). [5] Her mother, Remedios Trinidad y de Guzman, or Remedios T. Romualdez, a former boarder at the Asilo de San Vicente de Paul (Looban Convent) in Paco, Manila, was said to have been born out of wedlock, the child of a friar. [6] Remedios was from the town of Baliuag, Bulacan, and her own mother was from Capiz.

Saturday, October 26, 2019

The Lance :: Essays Papers

The lance, a staff weapon, was used during the chivalric era, mainly in tournaments. Tournaments were held as a type of competition for knights; they served as a source of entertainment and also as a means to keep knights fit and in practice. During a tournament, if the lance began to break or splinter, one point was scored. If the lance broke, it would be replaced with a new one. If the lance broke again, the two knights would dismount and begin to fight with swords. Tournaments were expected to exhibit chivalric behavior and attitude.

The lance was also used in warfare. At the beginning of a battle, the two sides would line up and charge at each other, each knight holding out a lance to knock his opponent off his horse. Jousting was done primarily at the beginning of a battle; the knights would then dismount and charge with swords, daggers, or axes. The lance could also be used on the ground, but not nearly as successfully as on horseback.

A jousting lance was about 11 feet long with a 6-inch, leaf-shaped blade, and it was used to throw the other knight off his horse. More armor was created after the wide use of the lance: the vamplate was created to protect the hand and arm, and breastplates helped stabilize the knight as he galloped on his horse.

The lance itself is very chivalric. Its great length and weight made it a challenge for knights to fight with one. It was a challenge in itself to use a lance, and anyone who could do so with elegance was considered chivalric. It took great practice to use a lance successfully; a quintain was used for target practice. As with most chivalric weapons, a great deal of practice was required to become successful with a lance. The chivalric era contained many weapons, but the lance truly symbolizes the era.

Thursday, October 24, 2019

Alternative Fuel :: Hydrogen Fuel H2/O2

Hydrogen can be "packaged" in several ways: as a fuel gas in an H2/O2 powered engine, or in the newly devised solid-state pellet of hydrogen isotopes, which contains roughly the equivalent of 5,000 cubic feet of hydrogen and is broken down to release gas into a second chamber, from which it goes to the engine for use. There are many ways to extract pure hydrogen from compounds, using methods such as electrolysis and chemical reactions.

One of the easiest ways is a chemical reaction. Simple chemicals (aluminum, sodium hydroxide, and water) can be reacted in the home to produce hydrogen to power your furnace or your hot water heater. No electrical power at all is required. The reaction also gives off a tremendous amount of heat; even this waste heat could be captured for heating the house. The resulting sodium aluminate is harmless and could be collected at recycling centers for complete acid/base neutralization. This is a simpler way than electrolysis to produce hydrogen for heating the home, though in an automobile it would be harder to do.

Electrolysis is another way to produce hydrogen, using electricity. It is a method I am more familiar with because I do it quite a bit in my room and have done several experiments with it. Electrolysis will produce a 2:1 ratio of hydrogen to oxygen out of water; higher voltages give faster collection. With a 12-volt battery it took around half an hour to fill a quarter of a Mountain Dew bottle, using a small amount of baking soda as a catalyst. I used baking soda because it was cheap and I knew it worked. Another time I used a 75-volt / 2-amp power supply with a catalyst of 2 drops of sulfuric acid to a pint of water, and the result was very different: I filled the whole Mountain Dew bottle in less than 6 minutes. All of that gas came from a little less than a drop of water (when I lit it off there was only a little speck of water on the
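The 2:1 hydrogen-to-oxygen ratio described above follows from the stoichiometry of water electrolysis, and the gas yield can be estimated with Faraday's law. The sketch below is illustrative and not from the essay: the function names, the assumed current, and the run time (chosen to resemble the 75 V / 2 A run) are all my own assumptions.

```python
# Illustrative sketch: estimating electrolysis gas yield via Faraday's law.
# All names and values here are assumptions, not from the essay.

FARADAY = 96485.0      # coulombs per mole of electrons
MOLAR_VOLUME = 24.0    # liters per mole of ideal gas at ~25 C, 1 atm

def hydrogen_volume_liters(current_amps: float, seconds: float) -> float:
    """Volume of H2 produced at the cathode.

    Each H2 molecule needs 2 electrons: 2 H2O + 2 e- -> H2 + 2 OH-.
    """
    charge = current_amps * seconds        # total charge in coulombs
    moles_h2 = charge / (2 * FARADAY)      # 2 electrons per H2 molecule
    return moles_h2 * MOLAR_VOLUME

def oxygen_volume_liters(current_amps: float, seconds: float) -> float:
    """Volume of O2 produced at the anode (4 electrons per O2 molecule)."""
    charge = current_amps * seconds
    moles_o2 = charge / (4 * FARADAY)
    return moles_o2 * MOLAR_VOLUME

if __name__ == "__main__":
    # Assumed run: 2 A for 6 minutes, roughly like the 75 V / 2 A experiment.
    h2 = hydrogen_volume_liters(2.0, 6 * 60)
    o2 = oxygen_volume_liters(2.0, 6 * 60)
    print(f"H2: {h2:.3f} L, O2: {o2:.3f} L, ratio {h2 / o2:.1f}:1")
```

Because O2 needs twice as many electrons per molecule as H2, the same current always yields twice the volume of hydrogen, which is the 2:1 ratio the essay observes.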

Wednesday, October 23, 2019

Regrets Case Essay

I regret not doing a lot of things in life, like telling that bully in the third grade that I was not afraid of him, or telling a teacher that I really appreciated all she had done for me. The one regret that has really changed where I am in my life is when I messed up an opportunity I was given to be a "third key" manager at a high-end retail establishment shortly after graduating from high school. I did not get the position because I failed a urine screening for drugs. I had smoked a little pot (three puffs to be exact) a few days prior to my interview in "celebration" of this opportunity as my friends and I hung out and partied. I had no idea at the time that a little pleasurable puff of paradise (39) from this marijuana cigarette would end the most promising career opportunity that I may ever have had.

After graduating from high school, I was on top of the world and loving life. My life was like a box of chocolates (25), full of delicious and hidden opportunities. I landed an interview for an amiable administrator's (57) position and everything went well during the interview. I was offered the job upon condition of a passed drug urine screening. Drug urine screening? (12). Say what?! I had never had one of those before, probably because I had never had a job worthy enough of expelling urine for someone to analyze.

Not only was I highly unprepared for this condition of my employment, but I was actually, surprisingly, very sure that I was going to pass the screening despite the fact that I had smoked some marijuana three days earlier (58). At the time, I was unaware of how long drug traces remained in my system, so I thought I would be good to go, but deep down, something did not feel quite right.
Another part of me told me that I was going to lose this once-in-a-lifetime opportunity (55) due to a stupid choice that I made one night to party and get high with my friend Andy (69).

Why didn't I just say no, as all the commercials and billboards had been urging me to do for years (24)? There is only one logical reason, and that is because I was stupid. No, that is not a logical reason; that is a lame excuse. I did it because I was selfish (17). I was more worried about my image at the time than my own future (19). Selfish…selfish…selfish (47). And stupid.

Losing the job that I had wanted so badly and that could have made a real difference in my life was highly disturbing. I disappointed myself in the worst way. I ended up working several mediocre jobs after that. However, as the old saying goes, when life gives you lemons, make lemonade (51). So, I made lemonade all the way to college. Here I can advance my pool of job opportunities (20), and with the pool of knowledge that I already possess from previous mistakes, I will be able to do more than make lemonade. I can make a better career for myself and can become a better person.

Tuesday, October 22, 2019

The Amazing Sixth Sense

The 1999 movie The Sixth Sense, written and directed by M. Night Shyamalan, with its amazing final twist, is one of the best thrillers ever made. Bruce Willis and 10-year-old Haley Joel Osment make an incredible connection that is rarely seen in other movies. It is to Willis's credit that he allows the little boy to shine in virtually every scene. Bruce Willis plays Dr. Malcolm Crowe, a well-known child psychologist who is living a happy life with his understanding wife (Olivia Williams). One night a man breaks into their house and claims to be a former patient of Malcolm's. He reminds Malcolm that he was always scared and that Malcolm failed to help him. He fires a gun at Dr. Crowe and then shoots himself in the head. A few months pass, and Malcolm recovers. But he is not the same person that he used to be. His career is turning into a failure and his marriage seems to be falling apart. Meanwhile, he takes an interest in the case of Cole Sear (Haley Joel Osment), an 8-year-old boy whose case is a lot like that of the former patient who shot him. Cole suffers from a mood disorder and spends a lot of his time alone at church. He is called a freak by his classmates at school. Cole asks Dr. Crowe to help him not to be scared. His divorced mom (Toni Collette) often notices scratches on Cole's body. Malcolm spends a lot of his time with Cole and tries very hard to help him. Finally, Cole decides to tell Malcolm his secret: he can see the dead. They often come to him and sometimes physically hurt him. In order to better understand Cole's case, Malcolm goes back to study the case of his former patient. What he finds out, and the final twist of the story, are things that should be kept secret for people who have yet to see the amazing The Sixth Sense. "The Sixth Sense is the movie that's worth sticking with," writes Jay Carr in the Boston Globe. He states that everything remains unc ...

Monday, October 21, 2019

Geography of the World's Largest Oil Spills

On April 20, 2010, a large oil spill began in the Gulf of Mexico after an explosion on a British Petroleum (BP) oil drilling rig there called Deepwater Horizon. In the weeks following the spill, the news was dominated by depictions of its growing size as oil continued to leak from an underwater well and pollute the Gulf of Mexico's waters. The spill harmed wildlife, damaged fisheries and severely hurt the overall economy of the Gulf region. The Gulf of Mexico oil spill was not fully contained until late July 2010, and throughout its duration it was estimated that 53,000 barrels of oil per day were leaking into the Gulf of Mexico. In total, almost 5 million barrels of oil were released, making it the largest accidental oil spill in the world's history.

Oil spills like the one in the Gulf of Mexico are not uncommon, and many other oil spills have occurred in the world's oceans and other waterways in the past. The following is a list of fifteen major oil spills (Gulf of Mexico included) that have taken place around the world, organized by the final amount of oil that entered waterways.
1) Gulf of Mexico/BP Oil Spill — Location: Gulf of Mexico; Year: 2010; Amount spilled: 205 million gallons (776 million liters)
2) Ixtoc I Oil Well — Location: Gulf of Mexico; Year: 1979; Amount spilled: 140 million gallons (530 million liters)
3) Atlantic Empress — Location: Trinidad and Tobago; Year: 1979; Amount spilled: 90 million gallons (340 million liters)
4) Fergana Valley — Location: Uzbekistan; Year: 1992; Amount spilled: 88 million gallons (333 million liters)
5) ABT Summer — Location: 700 nautical miles (about 1,300 km) from Angola; Year: 1991; Amount spilled: 82 million gallons (310 million liters)
6) Nowruz Field Platform — Location: Persian Gulf; Year: 1983; Amount spilled: 80 million gallons (303 million liters)
7) Castillo de Bellver — Location: Saldanha Bay, South Africa; Year: 1983; Amount spilled: 79 million gallons (300 million liters)
8) Amoco Cadiz — Location: Brittany, France; Year: 1978; Amount spilled: 69 million gallons (261 million liters)
9) MT Haven — Location: Mediterranean Sea near Italy; Year: 1991; Amount spilled: 45 million gallons (170 million liters)
10) Odyssey — Location: 700 nautical miles (about 1,300 km) off Nova Scotia, Canada; Year: 1988; Amount spilled: 42 million gallons (159 million liters)
11) Sea Star — Location: Gulf of Oman; Year: 1972; Amount spilled: 37 million gallons (140 million liters)
12) Morris J. Berman — Location: Puerto Rico; Year: 1994; Amount spilled: 34 million gallons (129 million liters)
13) Irene's Serenade — Location: Navarino Bay, Greece; Year: 1980; Amount spilled: 32 million gallons (121 million liters)
14) Urquiola — Location: A Coruña, Spain; Year: 1976; Amount spilled: 32 million gallons (121 million liters)
15) Torrey Canyon — Location: Isles of Scilly, United Kingdom; Year: 1967; Amount spilled: 31 million gallons (117 million liters)

These were some of the largest oil spills to take place around the world. Smaller oil spills that have been equally damaging have also taken place throughout the late 20th century. For example, the Exxon Valdez oil spill in 1989 was, at the time, the largest spill in United States history. It occurred in Prince William Sound, Alaska, spilled around 10.8 million gallons (40.8 million liters) of oil, and impacted 1,100 miles (1,609 km) of coast. To learn more about large oil spills, visit NOAA's Office of Response and Restoration.

References

Hoch, Maureen. (2 August 2010). "New Estimate Puts Gulf Oil Leak at 205 Million Gallons." The Rundown News Blog, PBS NewsHour. Retrieved from: https://web.archive.org/web/20100805030457/pbs.org/newshour/rundown/2010/08/new-estimate-puts-oil-leak-at-49-million-barrels.html

National Oceanic and Atmospheric Administration. (n.d.). "Incident News: 10 Famous Spills." Retrieved from: incidentnews.gov/famous

National Oceanic and Atmospheric Administration. (2004, September 1). "Major Oil Spills." NOAA's Ocean Service Office of Response and Restoration. Retrieved from: http://response.restoration.noaa.gov/index.php

Telegraph. (2010, April 29). "Major Oil Spills: The Worst Ecological Disasters." Retrieved from: telegraph.co.uk/earth/environment/7654043/Major-oil-spills-the-worst-ecological-disasters.html

Wikipedia. (2010, May 10). "List of Oil Spills." Wikipedia, the Free Encyclopedia. Retrieved from: http://en.wikipedia.org/wiki/List_of_oil_spills
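The gallon-to-liter figures in the list above can be sanity-checked with the exact definition of the US gallon (1 US gal = 3.785411784 L). This is a minimal sketch; the helper name is my own, not from the article.

```python
# Sanity check for the spill figures: convert millions of US gallons
# to millions of liters using the exact US-gallon definition.
GALLONS_TO_LITERS = 3.785411784  # liters per US gallon (exact)

def gallons_to_liters(millions_of_gallons: float) -> float:
    """Convert millions of US gallons to millions of liters."""
    return millions_of_gallons * GALLONS_TO_LITERS

# The Deepwater Horizon figure: 205 million gallons is about 776 million liters.
print(round(gallons_to_liters(205)))  # → 776
```

Running the same check against the other entries (e.g. 140 million gallons → 530 million liters) confirms the paired figures in the list are consistent.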

Sunday, October 20, 2019

Logistics Behind US Federal Regulations

Federal regulations are specific, detailed directives or requirements with the force of law, enacted by federal agencies in order to enforce the legislative acts passed by Congress. The Clean Air Act, the Food and Drug Act, and the Civil Rights Act are all examples of landmark legislation requiring months, even years, of highly publicized planning, debate, compromise and reconciliation in Congress. Yet the work of creating the vast and ever-growing volumes of federal regulations, the real laws behind the acts, happens largely unnoticed in the offices of government agencies rather than the halls of Congress.

Regulatory Federal Agencies

Agencies like the FDA, EPA, OSHA and at least 50 others are called regulatory agencies because they are empowered to create and enforce rules and regulations that carry the full force of law. Individuals, businesses, and private and public organizations can be fined, sanctioned, forced to close, and even jailed for violating federal regulations. The oldest federal regulatory agency still in existence is the Office of the Comptroller of the Currency, established in 1863 to charter and regulate national banks.

The Federal Rulemaking Process

The process of creating and enacting federal regulations is generally referred to as the rulemaking process. First, Congress passes a law designed to address a social or economic need or problem. The appropriate regulatory agency then creates the regulations necessary to implement the law. For example, the Food and Drug Administration creates its regulations under the authority of the Food, Drug and Cosmetics Act, the Controlled Substances Act and several other acts created by Congress over the years. Acts such as these are known as enabling legislation, because they literally enable the regulatory agencies to create the regulations required to administer and enforce them.
The Rules of Rulemaking

Regulatory agencies create regulations according to rules and processes defined by another law, known as the Administrative Procedure Act (APA). The APA defines a rule or regulation as...

[T]he whole or a part of an agency statement of general or particular applicability and future effect designed to implement, interpret, or prescribe law or policy or describing the organization, procedure, or practice requirements of an agency.

The APA defines rulemaking as...

[A]gency action which regulates the future conduct of either groups of persons or a single person; it is essentially legislative in nature, not only because it operates in the future but because it is primarily concerned with policy considerations.

Under the APA, agencies must publish all proposed new regulations in the Federal Register at least 30 days before they take effect, and they must provide a way for interested parties to comment, offer amendments, or object to the regulation. Some regulations require only publication and an opportunity for comments to become effective. Others require publication and one or more formal public hearings. The enabling legislation states which process is to be used in creating the regulations. Regulations requiring hearings can take several months to become final.

New regulations or amendments to existing regulations are known as proposed rules. Notices of public hearings or requests for comments on proposed rules are published in the Federal Register, on the websites of the regulatory agencies and in many newspapers and other publications. The notices include information on how to submit comments or participate in public hearings on the proposed rule. Once a regulation takes effect, it becomes a final rule and is printed in the Federal Register and the Code of Federal Regulations (CFR), and is usually posted on the website of the regulatory agency.
Type and Number of Federal Regulations

In the Office of Management and Budget's (OMB) 2000 Report to Congress on the Costs and Benefits of Federal Regulations, OMB defines the three widely recognized categories of federal regulations as social, economic, and process.

Social regulations seek to benefit the public interest in one of two ways. They prohibit firms from producing products in certain ways or with certain characteristics that are harmful to public interests such as health, safety, and the environment. Examples would be OSHA's rule prohibiting firms from allowing in the workplace more than one part per million of benzene averaged over an eight-hour day, and the Department of Energy's rule prohibiting firms from selling refrigerators that do not meet certain energy-efficiency standards. Social regulation also requires firms to produce products in certain ways or with certain characteristics that are beneficial to these public interests. Examples are the Food and Drug Administration's requirement that firms selling food products provide a label with specified information on the package, and the Department of Transportation's requirement that automobiles be equipped with approved airbags.

Economic regulations prohibit firms from charging prices or entering or exiting lines of business in ways that might cause harm to the economic interests of other firms or economic groups. Such regulations usually apply on an industry-wide basis (for example, agriculture, trucking, or communications). In the United States, this type of regulation at the federal level has often been administered by independent commissions such as the Federal Communications Commission (FCC) or the Federal Energy Regulatory Commission (FERC). This type of regulation can cause economic loss from the higher prices and inefficient operations that often occur when competition is restrained.
Process regulations impose administrative or paperwork requirements such as income tax, immigration, social security, food stamp, or procurement forms. Most of the costs they impose on businesses result from program administration, government procurement, and tax compliance efforts. Social and economic regulation may also impose paperwork costs due to disclosure requirements and enforcement needs; these costs generally appear in the cost estimates for such rules. Procurement costs generally show up in the federal budget as greater fiscal expenditures.

How Many Federal Regulations Are There?

According to the Office of the Federal Register, in 1998 the Code of Federal Regulations (CFR), the official listing of all regulations in effect, contained a total of 134,723 pages in 201 volumes that claimed 19 feet of shelf space. In 1970, the CFR totaled only 54,834 pages. The General Accounting Office (GAO) reports that in the four fiscal years from 1996 to 1999, a total of 15,286 new federal regulations went into effect. Of these, 222 were classified as major rules, each one having an annual effect on the economy of at least $100 million.

While the process is called rulemaking, the regulatory agencies create and enforce rules that are truly laws, many with the potential to profoundly affect the lives and livelihoods of millions of Americans. What controls and oversight are placed on the regulatory agencies in creating federal regulations?

Control of the Regulatory Process

Federal regulations created by the regulatory agencies are subject to review by both the president and Congress under Executive Order 12866 and the Congressional Review Act. The Congressional Review Act (CRA) represents an attempt by Congress to re-establish some control over the agency rulemaking process. Executive Order 12866, issued on Sept. 30, 1993, by President Clinton, stipulates steps that must be followed by executive branch agencies before the regulations they issue are allowed to take effect.
For all regulations, a detailed cost-benefit analysis must be performed. Regulations with an estimated cost of $100 million or more are designated major rules and require completion of a more detailed Regulatory Impact Analysis (RIA). The RIA must justify the cost of the new regulation and must be approved by the Office of Management and Budget (OMB) before the regulation can take effect. Executive Order 12866 also requires all regulatory agencies to prepare and submit to OMB annual plans to establish regulatory priorities and improve coordination of the Administration's regulatory program.

While some requirements of Executive Order 12866 apply only to executive branch agencies, all federal regulatory agencies fall under the controls of the Congressional Review Act. The Congressional Review Act (CRA) allows Congress 60 in-session days to review and possibly reject new federal regulations issued by the regulatory agencies. Under the CRA, the regulatory agencies are required to submit all new rules to the leaders of both the House and Senate. In addition, the General Accounting Office (GAO) provides a detailed report on each new major rule to the congressional committees related to the new regulation.

Saturday, October 19, 2019

Diversity in the Media - Essay Example

They may derive from any number of aspects of the communication content. "They may be considered as psychological or political or economic or sociological. They may operate upon opinions, values, information levels, skills, taste, or overt behavior" (Heibert, 2001). According to Don Rojas (2002), "News organizations help shape the perceptions of millions and, through these influences, even determine the destiny of our people. The media can either tell our stories accurately or misrepresent our experiences." One media organization that is dedicated to reporting the news of the day with an alternative slant that purposely calls into question the one-sided viewpoint typically presented in more traditional programs is The Daily Show with Jon Stewart. Although it doesn't at first seem to have any particular slant toward a gendered or racial audience, closer examination of the text and its associated advertising reveals that this show is geared mostly to the traditional WASP (White Anglo-Saxon Protestant) upwardly mobile male viewer.

The Daily Show is a 30-minute late-night television program that airs each weeknight and bases its humor on the news events of the day, occasionally making biting observations on policymakers and other issues. It takes a decidedly and unapologetically liberal stance against the more conservatively reported news and events reported elsewhere. In the show, a single 'news anchor', Jon Stewart, recaps the headline news stories of the day through the use of actual news footage, previously taped field interviews, in-studio guest appearances and live coverage of events when possible. The show employs approximately five other individuals, most of them male, who are placed in a 'reporter' type position to cover in-field interviews or other features of the show. It is produced by Jon Stewart and Ben Karlin with co-executive

Friday, October 18, 2019

Answering a Specific Question from the Novel To Kill a Mockingbird by Essay

ted by Jem, that he was "about six feet tall, judging from his tracks; he dined on raw squirrels and any cats he could catch, that's why his hands are bloodstained…There was a long jagged scar that ran across his face; what teeth he had were yellow and rotten, his eyes popped and he drooled most of the time." But was he really a monster who deserved to be imprisoned? Since there was no clear indication whether Boo was insane or not, we cannot easily define the purpose of his imprisonment. What we know is that his father did not want him in the asylum. He might not have been crazy during the earlier days, but who wouldn't go crazy after being kept from the outside world for decades? This was understood by Dill when he asked his friends how they would feel "if you'd been shut up for a hundred years with nothing but cats to eat?" This implied that the people really believed he was crazy. But crazy or not, it was just plain mean to imprison someone in the house. If he was crazy, it would actually have been better to leave him in the asylum for treatment, where he would have had the appropriate environmental and social conditions needed by people with a disorder. With this, I don't think there was anything good about his father and brother's way of protecting him, if that was what they insisted on doing. And I agree with Scout when she understood "why Boo Radley's stayed shut in his house all this time…it's because he wants to stay inside" to escape the horrible things that the townspeople can do to each

Witchcraft Research Paper Example | Topics and Well Written Essays - 500 words

Witchcraft - Research Paper Example People also get reminded of old hags when witches are being talked about. The image that people have of witchcraft has changed significantly in the recent past. This drastic change has occurred because of the numerous movies made on the subject; several books have also been published on witchcraft, which has again helped change people’s perception of it. This paper will shed light upon witchcraft and paganism since 1815; modern-day witchcraft will be discussed extensively in the following parts of this paper. “Before really getting into what Witchcraft is, perhaps we should take a look back at what it was: the history of it. Witches should be aware of their roots; aware of how and why the persecutions came about, for instance, and where and when the re-emergence took place. There is a great deal to be learned from the past. It's true that much of history can seem dry and boring to many of us, but that is far from so with the history of Witchcraft. It is very much alive and filled with excitement.” (Buckland, p. 1) Witches have often been banished from our society; they have been tortured beyond imagination, and this image has to be changed.

Thursday, October 17, 2019

Article Questions Essay Example | Topics and Well Written Essays - 500 words

Article Questions - Essay Example law is not automatically recognized and enforceable in other countries. By ignoring the realities of this important point, the authors were therefore unable to thresh out all the concerns surrounding their arguments that would make their thesis viable. Facts are concrete data or pieces of information that are observable with the senses or verifiable and capable of corroboration from authoritative sources such as books, encyclopedias, or institutional databases. Facts that are verified are held to be absolute truths that are incontestable. Opinions, on the other hand, are products of subjective perceptions and personal judgements, founded on the individual’s values, which may or may not ring true with other people with a different set of values and different perspectives. Opinions may be validated if sufficient facts are offered in support of the opinion. Without facts, however, the opinion remains invalidated and lacking in authority sufficient to be relied upon. In the article given, the authors’ assumptions cannot be called valid. Assumptions by themselves are suppositions and require support in terms of facts and other authorities in order to be accorded credence. This article fails to give any manner of numeric or qualitative data to support its assumptions, nor does it cite authoritative sources for them. (2) The article is likewise biased against internet intermediaries, and takes the fact that they profit from any transaction they facilitate as their motivation for encouraging illegal online activity. Together, these two measures are immediately applicable and may prove to be more effective than a lengthy legislative process that can have only an indirect effect on individuals. It takes time before implementing regulations that give effect to laws are perfected, by which time the nature and tactics of internet crime may have evolved to a

Literature Review about Social Media - Facebook, Twitter, Instagram

About Social Media - Facebook, Twitter, Instagram. SEO and PPC and Tools to do with Social Media - Literature review Example In general, the websites of organisations are built with the integration of either Pay-Per-Click (PPC) or Search Engine Optimisation (SEO) (Distilled, 2012). Keeping with the changing trend in current social media campaigns, the paper intends to critically discuss the tools that can be used by a WordPress website in a social media campaign involving Facebook, Twitter and Instagram. According to various studies and analytical research surveys, it has been critically recognised that there are fundamentally two types of tools used to develop an effective social media campaign for an organisation. According to Odden (2012), SEO and PPC are the major tools of an effective social media campaign. The tools have been widely accepted and implemented to streamline the promotional strategies of organisations’ websites along with the different products and/or services offered by any particular organisation (Odden, 2012). According to Chaney (2009), the concept of SEO is defined as a bit of a moving target, which involves a number of influences on the websites of a particular organisation. The best practice of SEO incorporates a mix of attention towards the content, keywords, social signals and links associated with the organisational websites to be promoted by the use of social media networks (Chaney, 2009). Moreover, the study of Eid & Ward (2009) also suggests that an effective practice of SEO involves certain other crucial factors such as the speed of the web page along with semantic mark-up and the authority of the author. This process provides greater benefit to organisations in maintaining the efficiency of their websites and also provides adequate support to promote a range of organisational products/services to a wide number of global clients (Eid & Ward, 2009).
In relation to the concept of PPC, the studies of Distilled (2012) and Ellam (2004) have critically


Tuesday, October 15, 2019

Law Essay Example | Topics and Well Written Essays - 1750 words - 2

Law - Essay Example The development of each of these areas of law will be discussed in turn, and any similarities as well as differences will be looked into so as to make an effective comparison between the two different applications that have been provided for, that is, one by way of statute and the other by the rule of Wheeldon v. Burrows and the cases that have developed the rule and applied the provision. Easements are where a benefit is provided to the dominant tenement, that is, the land which benefits from the easement, which allows the person who owns the dominant tenement to use the easement. The second element in respect of an easement is based on the fact that, since there is a benefit accruing, there is a burden on what is known as the servient tenement, or in other words the land that has been burdened by the easement. A vital principle related to an easement is the fact that it is a proprietary interest, and the accruing benefit and burden, subject to the laws of registered and unregistered land, transfer if the land that is either the servient or dominant tenement is transferred to another person. (Cursley et al 2009) The creation of an easement is dependent upon the satisfaction of the criteria that were laid down in Re Ellenborough Park1, which are generally referred to when determining the existence of an easement. The first and foremost requirement is that there must be a dominant and a servient tenement, thus eliminating the possibility of the easement existing in gross. (Hawkins v. Rutter)2. The second requirement is that the dominant and servient tenements’ occupation and ownership must be by different persons (Roe v. Siddons)3. However, according to Wright v. Macadam4, occupation by different persons would allow an easement to be created.
The third element is that the easement must benefit the dominant tenement, and this is dependent upon the proximity of the servient tenement; it has also been stated that the advantage should not be purely personal (Hill v Tupper); and the right must not be that of a recreational user. The fourth criterion is that the alleged easement must be capable of forming the subject matter of a grant. Case law has developed upon this criterion and has provided guidelines in this respect, the first one being that there must be a capable grantor, which is clear in the facts at hand; the second, that there must be a grantee, which is evident because the tenants were granted the rights; thirdly, that the subject matter of the grant is sufficiently certain, which is clear enough in respect of the facts, that is, the right to cross; and finally, that the right must be capable of being called an easement, that is, it is covered under the rights which have been recognized to be easements, which has been done in respect of the right to cross. The final factor, not expressly listed in the case, is that of public policy, which is considered when determining whether an easement exists. (Grey et al 2006) The next aspect to be considered is that an easement can exist either legally or under equity, as laid down under section 1 of the Law of Property Act (LPA) 1925. (Cooke 2006) As far as legal easements are concerned, there are a number of formalities that need to be fulfilled. The first requirement is that for a legal easement there must be either a fee simple absolute in possession or an adjunct to a term of years (section 1 Law of Property Act 1925). Secondly, easements can only be legal if created by way of statute, by prescription, by deed or by registered disposition. All other easements are equitable in nature.
(Dixon 2004) As far as easement by prescription is

Law Essay Example | Topics and Well Written Essays - 1500 words

Law - Essay Example Unfortunately, even in 2012, until more research is conducted to collect data on the duration of street bail, Hucklesby’s claims remain valid. Street bail was introduced in the British legal system in 2003. The amendment came into effect in 2004.1 Street bail was designed to speed up justice in the British legal system by enabling officers to spend more time collecting evidence, and less on bringing the suspect into the police station only to bail him or her out a few minutes later.2 There were estimates in 2004 that the new bail system would be economical, as it would provide an additional 390,000 hours of police officers’ time annually to focus on investigating crimes.3 Guidance on Street Bail was implemented in 2006. The guide aimed to direct implementation of Sections 30A to 30D of the Police and Criminal Evidence Act 1984 (PACE), as amended by Section 4 of the Criminal Justice Act 2003.4 While making a decision whether to bring the offender in or not, the police officer must consider the following facts: whether the offender has a history of violating bail, whether the offender could jeopardize evidence crucial to the judicial system if left free, whether the offender could continue offending if left free, and whether the data are correct regarding the address of the offender and the nature of the offense.5 In Northern Ireland, an equivalent document was published as well.6 However, Hucklesby argues that the pre-charge bail system only discourages justice. The nature of the offense, or the ability to jeopardize evidence, is left to the interpretation of the police officer.
As a result, Hucklesby argues, more arrests will take place, instead of fewer.7 Moreover, in cases where police officers are not willing to pursue the investigation, the offender will not be turned in.8 Cape too agrees with Hucklesby’s arguments, due to the inexperience of the arresting officers, a low threshold for arrest and long bail periods, during which suspects will not be able to present their own story.9 Some argue otherwise. There are arguments that even in the light of the new approach to bail, PACE “continues to use its ‘fundamental balance’ approach,”10 which was abused in the past. PACE’s approach is to protect the rights of the suspect, while allowing the police officers to gather enough evidence to identify the offender.11 One of its aims is also to decrease detention time.12 A famous case portraying the misuse of power before street bail on behalf of law enforcement officers is the Birmingham pub bombings, where six suspects were wrongfully convicted.13 The suspects were treated outside their protection system and tortured.14 Moreover, they were interrogated partly outside of the police station, which violates the rules of PACE.15 The new approach to bail on the street attempts to avoid such problems by allowing suspects freedom while the investigation is conducted. However, the power remains in the hands of the arresting police officers. Though PACE aims to decrease detention time, Skinns has found evidence that detention time has been increasing back to the pre-PACE level.16 In 1986, the mean detention time was over four hours, whereas in 1990-3 it increased to over six hours.17 In 1979, before PACE, the mean detention time was over ten hours.18 Moreover, police investigation is still a problem. Skinns found that gathering evidence remains a problem in the British criminal system, and it rests with “

Theory of Writing Essay Example for Free

Theory of Writing Essay Writing varies from a text message to a novel. Writers often have a difficult task in creating a piece of work that truly identifies the meaning of good writing. Every good writer usually starts with the basics, such as genre, audience, rhetorical situation, and reflection on the piece. Throughout this semester, we have gone through all of these key terms in great detail with each new assignment that has come our way. In doing this, not only as students but also as writers, we have come to create our own theory of writing. Every writer has a different theory of writing, though most are very similar. Now, at this point in the semester, after doing countless journals, in-class exercises, and final assignments, I think I have figured out my own theory of writing. The theory of writing is still a grey area to me after all of these assignments, but I can pick out its main points. My theory of writing involves three main points. The first is how a writer does his or her best work. For instance, I like to do my writing at night when there is peace and quiet, almost to where I can hear my own thoughts aloud. Secondly, the theory includes what the writer does in planning. My planning includes no planning. I sit at my computer and just start typing all my thoughts on the screen until I do not feel like typing anymore. After that is done, I usually cut the fat and revise all of my work. Lastly, I believe that the theory of the writing process involves having one main goal in mind supported by smaller “sub-goals,” like, for example, when doing my research essay on concussions in the NFL. I had the main topic of explaining concussions in the NFL, with smaller “ingredients” helping me explain, like the hits-on-a-defenseless-player rule and countless other ingredients, to help me create my ultimate “burrito.
Also in my theory of writing, I have learned to accept the four key terms (audience, genre, rhetorical situation, and reflection) as important concepts to keep in mind while writing all of the major assignments. Each term has a different meaning to me, and I have learned more and more throughout this semester after each assignment. The very first assignment introduced me to these terms; I still did not know exactly what they meant, but I had a general idea. I learned that genre affects what is being written because it sets the stage for what should be done and what readers expect when picking up the writing. Writers may go into writing a piece of art by combining a few genres but always have one genre that will shadow over the others. For example, Martin Luther King Jr.’s piece, “A Letter from Birmingham Jail,” has a specific genre, which is even stated in the title as a letter. King Jr.’s piece can also be looked at as a persuasive essay because he is trying to convince the clergymen of Birmingham of his point about equality. When speaking of genre, I also have to incorporate audience because these two terms go hand in hand. A writer’s audience is the readers whose expectations shape what they are going to be reading. Each genre usually has a specific audience. In the King Jr. speech, his initial audience was the clergymen of Birmingham. Just like genre, there can be multiple audiences for one piece. King Jr. was also talking to the people who supported his point about equality by saying we have waited too long for a change and need to act now. Rhetorical situation, on the other hand, was probably and still is the hardest key term for me to understand. I learned that rhetorical situation is the circumstance in which you communicate. This involves the writer’s personal factors, the purpose of the writing, the genre, the audience, the topic, and the context for which you are writing.
A writer’s personal factors include his or her background, such as beliefs (religious or political), where they were raised, how they were raised, life experiences, etc. The purpose of writing is why, as a writer, you even started to write a certain piece. For instance, the reason why I am writing this assignment, along with the other three assignments, is because I would like to receive a grade for my work. While doing this, I am also learning how to write better as a young adult going into the mature world, where writing is a key component of everyday life. Lastly, reflection is usually conceived as an afterthought. Reflection is known to be more personal to the writer. It is good for a writer to link personal experiences into the writing. Not only is the author reflecting but also the audience, by creating a mental image in their heads. Everyone will not have the same reflection because each individual thinks differently about various topics. Take, for instance, the magazine article “Is Google Making Us Stupid?” by Nicholas Carr. Carr explains that Google is something you are researching rather than learning from. In the article, Carr explains his reasoning: “When the Net absorbs a medium, that medium is recreated in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration.” This quote makes me, as a reader, imagine someone clicking off the screen to another window where the email site is. It also makes me reflect on when I may have done this sort of action. I actually just minimized this paper because I heard a Facebook chat message come in. Some people may not get distracted at all while reading.
But this example shows how reflection comes about. Readers may not know it while reading, but most relate personal experiences to a certain piece while reading it. After learning these key terms in the first assignment, I tried applying this part of my theory to the rest of the assignments throughout the semester. During my research essay, I found myself thinking about my broad topic, concussions in the NFL. The NFL has such a big fan base that I did not know how to satisfy all of it. So my research essay was mainly for adults who do not know the science behind concussions and also doctors who do not know what football is trying to do about concussions. After finding my genre and audience, I was able to use rhetorical situation and reflection in my paper as well by putting some of my own experiences with football into the paper. Overall, though, the research assignment did not help me think about my theory of writing too much because I just looked at the assignment as another paper. On the other hand, the genre composition assignment opened my eyes wide to my own theory of writing. The genre project was very different but helped me realize how important genre and audience coherency really is. As I established earlier, football has such a large audience that it is hard to incorporate it all in one genre. So with three genres at hand, I was able to reach almost every part of that audience. I broke the football audience into three groups: children, young adults, and adults. For each part of the audience, I had to think of a distinct genre to fit that audience. I ended up creating a video script for children, a magazine article for young adults, and a brochure for adults. Thinking of the different genres for the different audiences was not hard, but actually creating the different genres was difficult. It was difficult because, as the writer, I needed to think about the particular audience I was writing for.
For instance when composing the script for the informational commercial, I needed to think about the children and what they would see when watching this since they were my target audience. Therefore, I need to write the script so a child can listen in and understand the message I am trying to get across about head injuries in football. I had to do this with all my audiences. That is why I think audience is the most important part of my writing theory. Coming into this class, rhetorical situation may have been the strongest part of my writing theory because I thought no matter what, you should always put some part of yourself in the writing. When doing the genre composition project, I felt like I did not put any of my own experiences into the writing so my previous theory had been compromised! Since I have come up with my own theory, I can now apply it to other writing assignments in school and also the real world. Writing is an essential part to the working world, and if one does not know how to write properly, they will not go far. I am going to be training to be a firefighter in the summer and next year while going to school. I want to ultimately work as a firefighter, which many would think does not involve a lot of writing. It actually does. Whenever the fire department is called out to a scene, a firefighter has to take a report of the scene and who was involved. If done improperly, the firefighter may be fired. Also when a firefighter moves up in rank to the lieutenant and captain positions, paperwork is their life. That is why I am in college now to learn essential skills that can be applied to firefighting and to also have a backup plan if firefighting does not work out. My theory of writing has grown throughout this semester and it will continue growing all my life until I die. It does not stop with this class, though this class has taught me a lot about certain key terms and has opened my eyes to new theories. 
I will add on to my theory of writing as I grow as a writer.

Monday, October 14, 2019

Distortion effect for electric guitar

Distortion effect for electric guitar Distortion Effect For Electric Guitar Using FPGA Introduction Project Goals And Objectives The goal of the project is to implement distortion effects for electric guitar on an FPGA board. The algorithm that is going to be used is the Extended Karplus-Strong Algorithm (Jaffe & Smith, 1983). The analog audio signal from the electric guitar is captured by the analog-to-digital converter (ADC) module of the board. The FPGA is going to send the digital audio signal to a speaker to be played. The algorithm is going to be implemented on an FPGA instead of using the ASIC design approach. The pros and cons of FPGA design and ASIC design, including comparisons of their design advantages and design flows, are discussed on the Xilinx website (Xilinx Corporation, 2009). ASIC design has more steps to complete. Also, it is suitable for very high volume designs. For a single unit, using an FPGA is a better solution. An FPGA has no upfront non-recurring expenses and is faster to implement. Manufacturing of ASIC design chips takes a long time, whereas a design can be downloaded to the FPGA and programmed very fast. Considering all this, FPGA design is more suitable for this project. Project Deliverables The deliverables include the Verilog HDL code of the design. It is going to be synthesizable and usable with suitable FPGA boards. The final project report is going to be delivered. It is going to include the details of the hardware algorithm, the design process and the results obtained from the functional verification and the hardware validation of the system. A demonstration of the project is going to be done with the developed prototype of the system. The electric guitar is going to be the input of the system. The output from the board is going to be played through speakers.
Technology Trends Before the invention of FPGAs, CPLDs (Complex Programmable Logic Devices) were the most complex programmable logic devices. And before CPLDs, PALs (Programmable Array Logic) were used frequently. PALs were introduced in March 1978 by Monolithic Memories, Inc. They are only one-time programmable. PALs are built from PROM (programmable read-only memory) technology. They were mostly used in minicomputers. These devices have a fixed OR array and a programmable AND array, which enables the implementation of sum-of-products logic. Typically, PAL devices have a few hundred gates. CPLD devices have higher complexities compared to PAL devices, and they have features similar to both PAL devices and FPGAs. Like PALs, they don't need external ROMs, which enables CPLDs to start functioning just after startup. They have a much higher number of gates than PAL devices, around thousands to tens of thousands of gates. However, this is low compared to FPGAs, since the number of gates inside an FPGA can go up to a few million. FPGAs have the most gates and flip-flops of the three. They are more flexible, but their design is more complex. The first distortion effect for electric guitar wasn't produced on purpose. It was mostly caused by damaged guitar amplifiers. One example was a recording by the Johnny Burnette Trio, which produced a fuzz-tone effect (The Train Kept Rollin, 2009). Electronics-based distortion and overdrive effects came onto the scene in the 1960s and 1970s. The effects were achieved with diodes, transistors and amplifiers, and most of these pedals were analog. With the improvement in digital signal processing techniques, digital processors became an important part of the technology in the last decade. Market Research The digital products in the market nowadays feature more adjustable effects than just a distortion effect. Typically, they have parallel effect modules that can run simultaneously.
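The fixed-OR / programmable-AND structure described above implements Boolean functions in sum-of-products form. The following is a minimal illustrative sketch (the function and variable names are made up for this example, not part of the project) of how a PAL-style device evaluates such a function:

```python
# Sketch: evaluating a sum-of-products (SOP) function, as a PAL would.
# Each product term is a list of (input_index, required_value) pairs that
# the programmable AND array has been "fused" to match; the fixed OR
# array simply ORs all product terms together.

def eval_sop(product_terms, inputs):
    """Return the OR of all programmed AND terms for the given inputs."""
    return any(
        all(inputs[i] == v for i, v in term)
        for term in product_terms
    )

# Example: f(a, b, c) = a*b + (not a)*c, with inputs indexed 0=a, 1=b, 2=c
terms = [
    [(0, 1), (1, 1)],   # product term: a AND b
    [(0, 0), (2, 1)],   # product term: (NOT a) AND c
]

print(eval_sop(terms, [1, 1, 0]))  # True  (a*b fires)
print(eval_sop(terms, [0, 0, 1]))  # True  ((not a)*c fires)
print(eval_sop(terms, [1, 0, 0]))  # False (no term fires)
```

The "programming" of a PAL amounts to choosing which `(input, value)` literals survive in each AND term; the OR stage is fixed in silicon.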
They also have advanced software, with preset tones and effect libraries, tuners and even more features. Also, most of them have USB interfaces to a PC or Mac for compatible recording software. So, the project's features aren't going to be able to match the features of the products in the market. Boss, Line 6, Zoom, Korg and Digitech are among the major companies which produce digital guitar effects processors. Among the bestselling multi-effect electric guitar processors on Amazon.com, Zoom and Digitech have the largest market share. Requirements Functional Requirements The electric guitar will be connected to the FPGA board's analog-to-digital converter input. The analog-to-digital converter is going to convert the incoming analog signal to an 8-bit digital signal. The sampling frequency is going to be 44100 Hz, which is the standard for most digital audio files. The reason for choosing this sampling frequency is the human ear's ability: the human ear cannot perceive frequencies above 20 kHz. According to the Nyquist sampling theorem, a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency of the signal. If the highest frequency that the human ear can perceive is considered to be 20 kHz, anything above 40 kHz is going to be enough as a sampling frequency (Schulzrinne, 2008). The signal is going to be processed inside the FPGA using the Extended Karplus-Strong Algorithm (Jaffe & Smith, 1983). The processing should be fast enough that the human ear cannot perceive the delay between the time when the player hits a note on the guitar and the time the output is played by the speakers. After the processing, the 8-bit signal is going to be converted back to analog. Finally, this analog signal is going to be sent to the speakers and played.
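The Nyquist reasoning above can be checked with simple arithmetic. A minimal Python sketch (constant names are chosen here for illustration only):

```python
# Sketch: why 44.1 kHz suffices for audio, per the Nyquist criterion.
# A signal whose highest frequency is f_max can be exactly reconstructed
# from its samples if the sampling rate f_s satisfies f_s > 2 * f_max.

F_MAX_HEARING = 20_000   # Hz, commonly cited upper limit of human hearing
F_S = 44_100             # Hz, the sampling rate chosen for this project

nyquist_ok = F_S > 2 * F_MAX_HEARING
samples_per_cycle = F_S / F_MAX_HEARING

print(f"Nyquist criterion satisfied: {nyquist_ok}")        # True
print(f"Samples per 20 kHz cycle: {samples_per_cycle}")    # 2.205
```

The margin is thin (2.205 samples per cycle at the band edge), which is why practical converters also rely on an analog anti-aliasing filter before sampling.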
Nonfunctional Requirements The most important constraint on the system will be the time constraint. The delay between the input and output audio signals must be minimized. This requires the design to be fast. For this purpose, the resources available on the FPGA should be used efficiently. The most important constraints on the timing of the design are going to arise from the algorithm. Floating-point arithmetic might be needed, depending on the algorithm. This might cause the calculations to take longer. Another constraint on the system is the speed of the FPGA. The speed of the FPGA is not going to cause a problem for sampling the incoming analog audio signal. However, it is going to put a constraint on the speed of the algorithm. A pipelined algorithm might be used in order to satisfy the requirements for the speed and the timing of the system. There are going to be feedback loops, filters and saturator blocks in the system, so a pipelined algorithm is going to increase the utilization of these blocks, resulting in an increase in throughput. If the algorithm is pipelined, more resources are going to be needed to implement the pipelined system. The limited amount of resources such as memory blocks and arithmetic units might put a constraint on the design. Also, another constraint is going to be the data width of the ADC and DAC. Due to the limited number of bits in the ADC and DAC, the quality of the digital audio signal is going to be limited. Product Requirements Analysis The product requirements analysis is done using the Quality Function Deployment (QFD) technique. The most important criteria for customer satisfaction are low delay time and distortion effect level. Good sound quality is very important too. Implementation of additional effects is the least important feature of the product.
Low power consumption, low cost, effect adjustability, good bass and treble sound, and good feedback are also expected by the customer to meet good standards. In order to meet the customer expectations, the most important step is choosing the distortion effect algorithm correctly. The use of external resources should be kept to a minimum in order to meet the speed requirements of the system. Any use of external memory is going to add memory access time and make the system slower, resulting in unwanted delay. Bit resolution is also important, since it affects the sound quality: a higher number of bits increases the quality. It might also make it possible to avoid floating-point arithmetic when implementing the saturation algorithm. However, a higher number of bits might cause a problem for the pipelined implementation.

Project Requirements

The FPGA has to capture the analog sound, and this signal is going to come from the output of an electric guitar. The FPGA board that is going to be used is the Spartan-3A Starter Kit, chosen because of its built-in analog-to-digital and digital-to-analog converter modules. The board also has a stereo mini-jack for audio. These features make this board very suitable for audio processing and thus for this project. The FPGA chip also has 700K gates (Xilinx Corporation, 2009). In order to play the output, stereo speakers are going to be connected to the board pin to which the output signal is connected. The design is going to be done at register transfer level (RTL). The RTL design of the system is going to be described using Verilog HDL. For this, Xilinx's design tool Xilinx ISE WebPack 11.3, which is a free program, is going to be used. Before prototyping the system, functional verification has to be completed successfully. For this purpose, ModelSim, which is developed by Mentor Graphics, is going to be used.
Before starting the hardware design of the system, the algorithm is going to be simulated and verified using functional blocks in MATLAB Simulink.

Design Architecture

As discussed earlier, the algorithm that is going to be used is the Extended Karplus-Strong Algorithm (Jaffe & Smith, 1983). The algorithm makes extensive use of filters. It is modeled and simulated in MATLAB Simulink; the model consists of functional blocks, with the filters defined by their discrete transfer functions, and there is also a feedback loop. The sound is amplified by a gain block and passes through a saturation block. The saturation block causes the signal to saturate if its amplitude goes above or below specific thresholds. So the more the signal is amplified by the gain block, the more it is going to be distorted, since it is going to saturate at a lower amplitude relative to its new peak value. Since there are consecutive filter blocks, the signal is going to be delayed. To overcome this problem, the level of parallelism should be increased. Since there are 20 block RAMs in the FPGA, these can be used to increase the pipeline depth and the level of parallelism. When an 8-bit sample passes through the first filter, it goes on to the second one. Instead of waiting and doing the second operation using the same hardware, we should maximize the use of the resources and send the data that passed through the first filter to another resource; during that time, the next sample can pass through the first filter. Using block RAMs might be very beneficial here in order to increase the throughput and the speed of the system. Since the data that is going to be processed is not large, the internal block RAMs alone might be enough.
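As a behavioral sketch of the core idea (the basic loop, not the full extended algorithm from Jaffe & Smith), Karplus-Strong synthesis can be written in a few lines of Python: a delay line seeded with noise, a two-point averaging (damping) filter in the feedback path, and a decay factor. The buffer length, decay value, and function names here are illustrative assumptions.

```python
import random

def karplus_strong(freq_hz, n_samples, fs=44100, decay=0.996):
    """Basic Karplus-Strong plucked string: noise burst fed through an
    averaged, decaying delay line whose length sets the pitch."""
    n = int(fs / freq_hz)                       # delay-line length in samples
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for i in range(n_samples):
        y = buf[i % n]
        # two-point average acts as the string-damping low-pass filter
        buf[i % n] = decay * 0.5 * (y + buf[(i + 1) % n])
        out.append(y)
    return out

random.seed(1)                                  # reproducible noise burst
samples = karplus_strong(440.0, 4410)           # ~0.1 s of a 440 Hz pluck
```

In the FPGA version the delay line maps naturally onto the block RAMs mentioned above, which is why increasing the pipeline depth with them is attractive.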
Also, using an external RAM is going to add more delay to the path because of the longer memory access time. This is highly undesirable, since the most important criterion for the system is its speed.

Structure

The system consists of four main parts. The first part is where the user interacts with the system: the user generates an output from the guitar, and that output is captured by the FPGA board. The FPGA processes the output and passes it to the third part of the system, the speakers, which play the stereo output. Also, a PC is needed to send the .bit file that programs the FPGA. The FPGA board has an audio output port at the top right, and if needed, its DDR2 SDRAM can be used as external memory. The analog-digital circuitry is used for capturing the analog signal onto the board; it contains a 2-channel 14-bit analog-to-digital converter and a 4-channel 12-bit digital-to-analog converter. The switches can be used for turning the distortion on and off; they can also be used for the same purpose if additional sound effects are added to the system. The rotary knob can be used for adjusting the level of the distortion, the gain, or the volume; the quantity being adjusted can be selected with the switches, since there is only one rotary knob.

Interface

There are three interfaces in the system. The first interface is for programming the FPGA: the connection between the FPGA and the computer is achieved over USB 2.0, and the Xilinx iMPACT tool is used to program the FPGA. The second interface is for capturing the analog audio signal from the electric guitar onto the FPGA board; the on-board analog-to-digital converter is used for that purpose. The third interface is between the FPGA and the speaker.
The digital signal is going to be converted to an analog signal using Xilinx's digital-to-analog converter module and sent to the audio jack port of the board.

Implementation

Implementation Scope

As discussed in Section 3.2, the system consists of four main parts. The module for sending the .bit file from the PC to the FPGA is already provided by the Xilinx iMPACT tool, so no implementation is required for it. The second module is the audio input to the board: the input module. The input module is going to be implemented with the help of the on-board analog-to-digital converter. As discussed earlier, the sampling rate and the bit resolution are the most important parameters of the input module; the sampling rate is going to be 44100 Hz and the resolution is planned to be 8 bits. The captured analog signal is converted to a digital signal and sent to the FPGA module for processing. The FPGA module is responsible for processing the digital signal. For faster and more efficient processing, a pipelined implementation is going to be done, using an RTL description of the hardware in Verilog HDL. The output module is going to convert the processed digital signal to analog and send it to the board's audio jack port so the processed signal can be played through speakers. Xilinx's DAC module is going to be used for the implementation of this module.

Implementation Coverage

The algorithm that is going to be used for the implementation is the Extended Karplus-Strong Algorithm (Jaffe & Smith, 1983). The output is sent to gain and saturation blocks, and there are filter blocks and delay blocks in the system. These functions are going to be implemented inside the FPGA. The first functional block is a pick-direction lowpass filter (Smith III, Pick-Direction Lowpass Filter, 2009).
The second functional block, before the feedback loop, is a pick-position comb filter (Smith III, Pick-Position Comb Filter, 2009). In the feedback loop there is a delay block on the top; the other blocks are again filters. After the delay block, the signal goes through a two-zero string damping filter (Smith III, Two-Zero String Damping Filter, 2009). Before the addition operation in the feedback loop, another pick-direction lowpass filter is used. After the loop, there is a dynamic level lowpass filter (Smith III, Dynamic Level Lowpass Filter, 2009). After these filters and delays there is a gain block, which is used for increasing the level of distortion. The distortion effect itself is generated by a saturation block. The saturation can use either hard clipping or soft clipping. Soft clipping has higher complexity: it is a third-order polynomial, and it results in a smoother sound. However, for a more distorted and fuzzy sound, hard clipping is preferred. Since it has a heavier sound and is easier to implement, hard clipping is going to be used.

Develop or Adopt Decision

The most important part of the project is the FPGA board, and it is going to be adopted. If I wanted to design the circuit with a PCB design tool, in which I am not experienced, I would have to pay a lot of money to get it manufactured, and the design would have to be perfect before getting the chip produced. The decision between an FPGA design and an ASIC design was discussed earlier, so buying and using an FPGA board is the best option here. The Spartan-3A Starter Kit is going to be used for the project. For the output interface of the design, Xilinx has a module described in Verilog that is available for free; that module is going to be used for the DAC and output. If a free module for the input port of the system is available from Xilinx, it is going to be adopted as well.
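The two saturation choices can be sketched in Python: the hard clipper simply clamps the sample to a threshold, while the soft clipper shown here uses the common third-order polynomial x − x³/3. The exact polynomial and threshold in the design are not specified in the text, so both curves below are assumptions for illustration.

```python
def hard_clip(x, threshold=0.5):
    """Clamp the sample to [-threshold, +threshold] (heavier, fuzzier sound)."""
    return max(-threshold, min(threshold, x))

def soft_clip(x):
    """Third-order polynomial soft clipper (smoother sound)."""
    if x >= 1.0:
        return 2.0 / 3.0
    if x <= -1.0:
        return -2.0 / 3.0
    return x - x ** 3 / 3.0

# a heavily amplified sample saturates under both schemes
print(hard_clip(3.0), soft_clip(3.0))  # 0.5 0.6666666666666666
```

The hard clipper is a natural fit for fixed-point hardware (a comparison and a multiplexer), which is consistent with the report's choice of hard clipping for ease of implementation.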
Otherwise, the ADC module is going to be developed according to the ADC hardware available on the FPGA board. The design on the FPGA is going to be based on an existing algorithm, but it is going to be designed by me. An electric guitar and speakers with amplifiers are also needed for the project; they were already available before the start of the project. For software, Xilinx ISE, Xilinx iMPACT, ModelSim XE, and MATLAB are going to be used. MATLAB is already available, and the others have free versions for students.

Implementation Process

Three modules are going to be implemented. Each module can be implemented independently of the others; finally, all the modules are going to be connected under a top module. The DSP module is the main part of the design, where the algorithm is going to be implemented. The functional verification of this design is independent of the other modules.

Implementation Resources

The resources for implementation can be grouped into two: hardware resources and software resources. The most important hardware resource is the FPGA development board; the Spartan-3A Starter Kit is going to be used. This specific board was chosen for several reasons: it is suitable for DSP applications, it has ADC and DAC modules, and it has a stereo audio jack for outputting the processed signal. The FPGA is going to be programmed from a PC, and the hardware of the system is going to be described using the Xilinx ISE tool, which requires a PC, so a PC is also needed for implementation. The connection of the board to the PC requires a USB cable, which is provided with the board. We also need an electric guitar and speakers. Besides the hardware resources, some software resources are going to be needed too.
First, before writing the code for the hardware, the algorithm is going to be tested and the functional blocks made clear using MATLAB Simulink. For synthesis and implementation, Xilinx ISE is going to be used; it synthesizes and implements the hardware described in Verilog HDL, and it also includes the Xilinx iMPACT tool, which is used for sending the .bit file to the FPGA for programming. For functional verification, the Xilinx Edition of ModelSim, developed by Mentor Graphics, is going to be used.

Implementation Activities

The project group consists of only one person, so the design, verification, implementation, and testing are all going to be done by me. During the project, additional training and study is going to be required in digital signal processing and filters; digital filter design should also be studied. Another area that needs improvement is writing testbenches to verify the designed system.

Testing

Testing Scope

The testing of the system consists of two parts: a functional verification part and a hardware validation part. For functional verification, the ModelSim XE software is going to be used with Verilog HDL. The parts that are going to be tested are the input module, the output module, and the DSP module. After the integration of the modules to form the system, the whole system is going to be tested. The hardware validation of the DSP and output modules can also be done without a working input module: a signal generated inside the FPGA can be processed, sent to the output module, and played, and this can be tested.

Testing Coverage

As explained above, the modules are going to be tested individually at first. The input module is going to get an analog signal from an external source; this might come from the electric guitar or directly from a PC. If the input signal comes from a PC, the signal can be made simple, and the testing is therefore simpler.
After the conversion, the signal is going to be observed. If the output module is working, the input signal can also be transferred directly to the output module without any signal processing done on it. A signal generated inside the FPGA is going to be enough to test the output module. The DSP module is going to be tested by functional verification: the filters, the gain, and the saturation blocks are going to be tested, and after these, the whole DSP module.

Pass/Fail Criteria

The pass/fail criterion for the input module is its analog-to-digital conversion performance: if a given analog input can be correctly converted to a digital signal, the module passes the test (Azima DLI, 2009). In order to pass, the module has to give the correct output for each stimulus applied. The output module has to perform digital-to-analog conversion and send the signal to the speakers. For that, a signal is going to be generated inside the FPGA, and its amplitude and frequency are going to be changed; according to the changes, we expect different outputs, which are going to be listened to through the speakers. In order to pass the test, the output module should respond correctly to every amplitude and frequency change. The DSP module is going to be tested with functional verification: a reference model is constructed at the behavioral level, and randomly generated stimulus is applied to the design and to the reference model at the same time. In order to pass the test, the results from the DSP module and the reference model have to match 100%. Another important criterion for the DSP module is its timing: the delay between the input and the output has to be below a determined quantity in order to pass the test.
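The reference-model comparison described above can be sketched outside of Verilog as well. The following Python sketch drives the same random stimuli through a placeholder DUT function and a behavioral reference model and counts mismatches in a scoreboard, the way the self-checking testbench is intended to work; the DUT here is just a stand-in hard clipper, not the real design.

```python
import random

def reference_model(x):
    """Behavioral-level golden model: ideal hard clipping at +/-0.5."""
    return max(-0.5, min(0.5, x))

def dut(x):
    """Placeholder for the design under test (would be the RTL simulation)."""
    return max(-0.5, min(0.5, x))

random.seed(0)
mismatches = 0
for _ in range(1000):
    stimulus = random.uniform(-2.0, 2.0)        # random stimulus generation
    if dut(stimulus) != reference_model(stimulus):
        mismatches += 1                          # scoreboard records errors

print("mismatches:", mismatches)  # prints "mismatches: 0" (100% match passes)
```

The 100%-match criterion maps directly onto the final assertion: any nonzero mismatch count fails the DSP module.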
Testing Approach

In order to test the DSP module, a self-checking testbench is going to be written in Verilog HDL. There is going to be a behaviorally modeled reference unit inside the testbench. The testbench generates random stimuli, which are applied both to the design under test (DUT), a module from the design, and to the reference model; the results are then compared in a scoreboard. The timing of the system is also going to be considered, since it is one of the most important parts of the project: after the functional verification, the timing analysis of the implemented system must be done using Xilinx ISE.

Testing Resources

First, in order to test the algorithm, MATLAB Simulink is going to be used. In order to test the input module, preferably a PC or an electric guitar is going to be needed, as discussed above. To test the output module, speakers or headphones are going to be needed. For functional verification of the DSP module, ModelSim XE is needed, and for the timing analysis of the design, Xilinx ISE is going to be used.

Test Cases

After the inputs are applied, the outputs from the reference model and the DUT are going to be stored in a response file in .txt format. Finally, a log file is going to show where the errors occurred, if there are any, or show that no errors occurred in the simulation. Looking at the log file and the response file, we are going to be able to see exactly where the errors occurred.

Test Activities

Since the group has just one member, every part of testing is going to be done by me. More training on writing self-checking testbenches in Verilog HDL is needed.

6. Schedule

Looking at the PERT chart, we can calculate the critical path. The critical path consists of the activities A-F-G-H-I-J-K, which leads to a completion time of 133 days.
If the most optimistic and the most pessimistic completion times of each activity are estimated, we can calculate the expected completion time and the variance of the project. For each activity with optimistic time O, most likely time M, and pessimistic time P, the standard PERT expressions are an expected time of (O + 4M + P) / 6 and a variance of ((P - O) / 6)². Using these expressions, the completion time and the variance are calculated; the activities in the critical path are highlighted, and the calculations are done along the critical path. The PERT calculation gives almost the same result as the CPM result: CPM gave 133 days, and PERT gives an estimated project completion time of 133.166 days. The variance turned out to be 26.58, which corresponds to a standard deviation of about 5.2 days, so the completion time can be expected to vary by roughly five days in either direction. The Gantt chart of the project is given. The estimated start date of the project is December 27, 2009, and the project is planned to be completed on May 9, 2010.

Bibliography

Azima DLI. (2009, February 8). Analog to Digital Conversion. Retrieved November 29, 2009, from Azima DLI Corporation Web site: http://www.azimadli.com/vibman/analogtodigitalconversion.htm

Collicut, M. (2009, March 3). Extending the Karplus-Strong Algorithm to Simulate Guitar Distortion and Feedback Effects. Retrieved November 29, 2009, from McGill University Web site: http://mt.music.mcgill.ca/~collicuttm/MUMT618/KSA_distortion_and_feedback.html

Jaffe, D. A., & Smith, J. O. (1983). Extensions of the Karplus-Strong plucked string algorithm. Computer Music Journal, 56-69.

Schulzrinne, H. (2008, January 9). Explanation of 44.1 kHz CD sampling rate. Retrieved November 27, 2009, from Columbia University Web site: http://www.cs.columbia.edu/~hgs/audio/44.1.html

Smith III, J. O. (2009, March 21). Dynamic Level Lowpass Filter. Retrieved November 28, 2009, from Stanford University Web site: https://ccrma.stanford.edu/realsimple/faust_strings/Dynamic_Level_Lowpass_Filter.html

Smith III, J. O. (2009, March 21). Pick-Direction Lowpass Filter.
Retrieved November 28, 2009, from Stanford University Web site: https://ccrma.stanford.edu/realsimple/faust_strings/Pick_Direction_Lowpass_Filter.html

Smith III, J. O. (2009, March 21). Pick-Position Comb Filter. Retrieved November 28, 2009, from Stanford University Web site: https://ccrma.stanford.edu/realsimple/faust_strings/Pick_Position_Comb_Filter.html

Smith III, J. O. (2009, March 21). Two-Zero String Damping Filter. Retrieved November 28, 2009, from Stanford University Web site: https://ccrma.stanford.edu/realsimple/faust_strings/Two_Zero_String_Damping_Filter.html

Sullivan, C. R. (1990). Extending the Karplus-Strong Algorithm to Synthesize Electric Guitar Timbres with Distortion and Feedback. Computer Music Journal, 26-37.

The Train Kept Rollin. (2009, November 21). Retrieved November 21, 2009, from allmusic: http://www.allmusic.com/cg/amg.dll?p=amgsql=33:jjfoxzq0ldte

Xilinx Corporation. (2009, April 8). Getting Started with FPGAs: FPGA vs. ASIC. Retrieved November 20, 2009, from Xilinx Corporation Web site: http://www.xilinx.com/company/gettingstarted/fpgavsasic.htm

Xilinx Corporation. (2009, October 6). Spartan-3A Starter Kit. Retrieved November 27, 2009, from Xilinx Corporation Web site: http://www.xilinx.com/products/devkits/HW-SPAR3A-SK-UNI-G.htm

Sunday, October 13, 2019

Theories of Sociology :: Sociology Essays

There are many theories in sociology for getting a better understanding of a society. Many things impact an individual's behaviour, lifestyle, relationships, and much more, and technology is one of them. The internet is used worldwide, and we can use sociology to determine what importance and place it holds in society. To understand this invention and its implications for society better, this paper will cover three well-known theories: conflict theory, functionalism, and symbolic interactionism. These three theories look at the internet from different viewpoints, at the micro and macro levels: functionalism focuses on society as a whole, whereas conflict theory focuses on groups within that society, and symbolic interactionism focuses on individuals in the group and society. Thus, the viewpoint changes depending on the population and the perspective. One of the important perspectives is structural functionalism. This approach focuses on "various components of society without prioritizing or assessing their importance to the social system as a whole; in effect all elements of a society are weighed equally" (Naiman, 2004, p. 18). In terms of the internet, a functionalist would see the internet as a resource that brings efficiency to the lives of individuals because it is convenient; it also allows people to interact with one another around the world. The questions a functionalist would ask regarding the function of the internet in society would be: how does the internet help people access their provisions faster? How does it help people acquire knowledge, and how can services be made accessible to the demanding population? Thus, it mainly considers the benefits for all instead of for a few individuals. The internet is used more efficiently nowadays because it saves one's time.
An individual can spend a few minutes online booking a flight ticket instead of going to the travel agency in person, or calling customer services and going through a long, hectic procedure. As technology advances, it plays an even greater role in shaping the lives of individuals. Some people may wish to visit the bureaucracies in person; however, others may not have the time to visit the offices due to family responsibilities or for other reasons. A functionalist theorist would see a vast benefit when it comes to administration-related online services: they eliminate the waiting list, make life easier for citizens, and take up less space in the bureaucracies.

Saturday, October 12, 2019

From Western to Asian Environmental Ethics Essay examples -- Asia Reli

The 20th century may be considered the ultimate expression of Western ideals and philosophy: "civilized" humanity's attempt to dominate "uncivilized" peoples and nature. The 21st century soberingly proclaims the shortsightedness and ultimate unsustainability of this philosophy. This paper shows the limitations of a modern Western world-view, and the practical applicability of ideas to be found in Asian philosophies. In outline, the contrast may be portrayed by the following overgeneralizations: (1) From a linear to a cyclical world view; (2) from divine salvation to karmic necessity; (3) from human dominion over nature to human place within nature; (4) from the perfectibility of humanity and the world through science; (5) from atomistic mechanistic individualism to organic interdependence; (6) from competition to cooperation; (7) from glorification of wealth to respect for humanhood; (8) from absolute cultural values to necessary common values. Each of these attitudes is examined in light of what we now know about the world in the 21st century, as Asian philosophy is found applicable to address future problems. (1) From a linear to a cyclical worldview The Judaeo-Christian-Islamic world-view epitomizes linearity. God creates the world out of nothing and destroys it when he pleases; the world has a beginning and an end. Moreover, the beginning and end of the world are within human memory and anticipation; humans trace their lineage back to Adam and anticipate the end of the world. Recent Christians may argue for a more ancient beginning in the Big Bang, but seem no less convinced of the temporality and linearity of the human project. Humans are born from nothing, live only once on this world, and then return to dust or are j... ...f the earth. 
If the human project is to be maintained more than a few generations into the future, considerations of population control, biological diversity, sustainability of technologies, and responsibility to future generations become unavoidable. These depend not on cultural tastes or traditions; they become minimum prerequisites for human continuity. The shrinking of the globe and the foreshortening of history demand new common values, not based on the power of one group over another, but based on a consciousness of our organic interlinking with each other. Stripped of their cultural paraphernalia and chauvinisms, some Western as well as Asian religious philosophies may already hold this ideal, but one need not be religious to understand and espouse it. The survival of the planet as we know it demands nothing less than human cooperation in this project.

Friday, October 11, 2019

History of Computers

Well, the English dictionary states that a computer is "Also called [a] processor. An electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and display the results of these operations" (dictionary, 2011). But computers are much more than that. Computers are not just pieces of equipment; they are tools that make up our everyday lives and greatly help and facilitate them; they make our lives faster, easier, simpler, and more efficient. They have only been around for a small amount of time. They are part of the "modern era," as some refer to it, and are the fastest growing technology in man's history (History of Computers, 2011). There are many debates about which computer was the first one to be invented. This question is very difficult to answer unless it is made more specific, the reason being that it all depends on what you are looking for in a computer. There are many types of computers, and they can be arranged in categories. Some example categories include analog computers, hybrid computers, portable computers, desktop computers, war computers, mainframe computers, mini computers, supercomputers, and the list goes on and on (Types of Computers, 2011). The list could also include things like satellites, GPS systems, and house security alarms. All these things can be called computers because they have the characteristics of computers and are processors. For this reason, there is no definite answer to the question "Which was the first computer ever built?". The question has not been left unanswered, though. The first programmable computer, the Colossus, appeared in the year 1943, and by many has been named the first computer to exist. It was used to "decipher World War II coded messages from Germany" (The History of the Computer, 2011). This was the main task that computers had at that time: they were used as "war computers" to encode and decode messages from enemies.
As stated above, it was the "first programmable computer." This means that in that category, the programmable computer category, it was the first, but it does not mean that it was the first "computer" ever to be invented. Others attribute the title of "first electronic computer" and "first computer" to ENIAC, which was called "the brain" (The History of the Computer, 2011). ENIAC was developed by John W. Mauchly and J. Presper Eckert at the University of Pennsylvania, and by many is considered the first computer. ENIAC set many records, including the cost, space, and material used to build it: it used an extraordinary 18,000 vacuum tubes and 1800 square feet of space (First Computers, 2011). ENIAC was a major step in the development of the computer, but two inventions that really spurred on the building of computers were the silicon chip and the transistor. Both made it possible for computers such as ENIAC to be reduced to a much smaller size, which cost less and was also more efficient and safer. The transistor was created by people working at Bell Labs, and the silicon chip was invented by Jack St. Clair Kilby of Texas Instruments. Silicon chips are still used in our modern portable computers, and they are the reason why we have portable computers, since they greatly reduce the size needed for a processor. These inventions made computers like ENIAC smaller, safer, and more affordable. This meant that instead of only the government owning computers, now businesses could own computers. The computers were still too big, dangerous, and laborious for home use. They were not safe because, like ENIAC, which had so many parts, they had to be maintained by professionals, and these many parts also had to be replaced very often. Because of this, maintenance was extremely time consuming, and the computers were laboriously slow machines that were not yet efficient.
The company that was responsible for many of the "first computers" was IBM. This company was the "unquestioned market leader in selling these large, expensive, error-prone, and very hard to use machines" (Mainframes to PC's, 2011). After the silicon chip came to be, the change from big computers to portable, everyday-use computers was under way. The portable computers started coming out in the early '80s (Mainframes to PC's, 2011). The first major company to design computers was IBM, but then companies like Apple, Microsoft, and Tandy Radio Shack started producing their own portable computers, which IBM had not yet done. IBM at this time was still not involved in the "portable computer business" (The History of the Computer, 2011); it was still producing government- and business-owned computers.

The first two people to create computer code were Bill Gates and Paul Allen. Their program was called BASIC, and later Bill Gates created Microsoft, which sold computer software (Personal Computer History, 2011). IBM was the first, though, to create a PC whose architecture allowed pieces to be added to it (Mainframes to PC's, 2011).

Apple's Macintosh was the first computer to come out with a GUI (graphical user interface). This meant that it could be programmed by people at home and was easy to use with its interface, and it included a mouse, which made it a personal favorite for people at home, while IBM was well liked by businesses and big corporations because of its programs like Microsoft Word, Excel, Lotus 1-2-3, and its spreadsheets (Mainframes to PC's, 2011). In 1977, Tandy Radio Shack and Apple had the only machines equipped with disk drives. This meant that their software could be sold on "floppy disks," which made distribution easier for them and helped their companies become very successful (Knight Dan, 2001).
The portable computer industry continued to evolve and change, but it took a couple of decades before they started producing the kinds of computers we have now. Next came the evolution of the new portable computers. Just like the big computers ENIAC and UNIVAC, the portable computers had a "revolution" of their own, in which they improved as better programs and better devices were created. This "revolution" is fairly recent, stretching from less than a decade ago up until now.

One of the major milestones in technological and computer advancement was the touchscreen. Computers called tablet PCs started being produced, and touchscreen smart-phones too. Tablet PCs are capable of being written on with a special pen. Now, computer companies are trying to build the smallest, fastest, most portable computers, and these computers are being called "notebooks." Also, some of the newer computers are equipped with built-in internet, meaning that wherever the user is, the computer receives an internet signal and can connect to the World Wide Web (WWW). Apple has an "app store" of its own, meaning users can download applications ranging from school tools to games, with these apps' prices ranging from free to around $50. Some have even called this a cultural revolution in computer development (Elliptic Antonio, 2011). There are also other computers onto which third-party software can be downloaded. This means that any person who can create computer software can then share it with a community of people. This is all made possible by the World Wide Web.

Computers have drastically changed the way we work; both efficiency and productivity have skyrocketed. Computers are now used for science, calculations, medicine, and also things like 3D building. Our whole stock exchange market is made up of computers that calculate and then communicate the news to the public. Computers are also frequently used to create plans for buildings, homes, and businesses.
They can also help save lives in the medical and research fields. They can help prevent illnesses, help track outbreaks, and even discover new ones (The Pros and Cons of Technology Today, 2011). Next comes the World Wide Web (WWW), which makes computers even more useful.

The World Wide Web connects the entire world together with an internet network, and many new purposes for computers have arisen thanks to it. First of all, computers are very important for communication. We can now communicate with people on the other side of the world in only a matter of seconds. This is done through social networks, emails, and instant messaging programs. All of this technology has greatly increased our productivity, as we can share our findings with others in a short amount of time. Also, smart-phones have evolved so much that now they are being called "portable computers" themselves. We can now do almost everything on our phones that we can do on our computers. This is the reason why many believe that in a few years computers will have disappeared, and smart-phones and tablets will replace them. Some even say that in 2011 smart-phones and tablets will take over (Lohr Steve, 2011).

With all the positives stated above, there are also some "side effects" created by computer usage. One example is that some people rely so much on technology that when it fails them, they are unproductive. Computers also decrease the amount of exercise people get. Now, people tend to spend most of their day at work on the computer, and then at home in front of their TVs. This has greatly affected, for example, the obesity death rate in the United States. Around 600,000 adults die each year from physical inactivity, and this number has been increasing each year (Obesity Levels in America, 2008). This is due to laziness among the people, created by the use of technology, and has been an increasing factor in health issues.
Many people are not getting enough exercise because they are on their computers too much, and this is one of the major negatives that computers have brought about. Also, many eye doctors have shown that being in front of a computer screen for too long can damage eyesight. Other health concerns have arisen too, regarding rays that could be emitted from computers and could be dangerous to our bodies (Hartmann Thomas, 2011).

Many predictions have been made about what will happen to computers in the future: they will continue to evolve in ways that are at the moment unimaginable, and they will make our lives more eco-friendly. That is a major development in computers right now. Scientists are finding new ways to make houses eco-friendly and more efficient in the way they work. This is all related to computers, since these houses are equipped with computer processors of their own. Some houses of the future will include automatic heating systems and automatic blinds. For example, they will receive data from a nearby weather station and then apply it to the house, making the house cooler on a hot day and warmer on a cold day. There will also be houses with refrigerators, for example, that display when they are about to run out of food and what they are running out of.

The lists go on and on about the future development of computers and how they will change the way we think and live. Computers are a huge part of our daily lives, and many experts believe that it will stay that way for a long time to come. They believe that the technology will improve and will speed up our work and make it more efficient. There are others, though, who believe that computers are making our generations lazier and less scholarly. This is a major debate that is ongoing about computers and their pros and cons.