Saturday, August 22, 2020

Ways to Write a Research Paper Cover Page

The cover page for your research paper isn't hard to create. There are three different formats you can use, depending on your instructor's requirements: American Psychological Association (APA), Modern Language Association (MLA), and Chicago style. Here are some tips for writing a research paper cover page.

Making a cover page in APA style:

Space your title correctly. Use the return key to move your title about 1/3 of the way down the page. If the title is long or contains a colon, write it on two lines. In this style, you must be as precise as possible. Capitalize the major words in the title, such as nouns, verbs, and adjectives; there is no need to capitalize minor words or words with fewer than three letters.

Write your name. After writing the title of your research paper, write your name, but do not include a title such as Dr. or Prof. If there are two or more authors, separate their names with commas and "and."

Write the name of your institution. Next, write the name of your university or the organization you are affiliated with, to tell readers where you did most of your research. If there is only one author, or the authors all belong to the same institution, write the name of the institution once after listing the author names. If the authors are affiliated with different universities, write each author's name together with the name of his or her institution.

Double-space the cover page. Highlight your text and select the spacing button on the HOME tab of your word processor. Then, from the spacing drop-down menu, select "2," and your text is double-spaced.

Center the title horizontally. Highlight your text on the page, then select the center button under Paragraph.

Add a running header. A running header appears at the top of the page and continues throughout the paper. The title is written here in all capital letters.

Format the page. Use 12-point Times New Roman font, and make sure there is a 1-inch margin on all sides.

Making a cover page in MLA style:

Skip the cover page unless it is required. In this style, a cover page is not required unless your teacher asks for one. Otherwise, you simply write your title in the center of the first page and begin your text on the next line; you then write your name, your teacher's name, the course name, and the date, double-spaced, in the upper-left corner. A header on the right side includes your last name and the page number.

Skip down the page. In this format too, you begin writing 1/3 of the way down the page. Write your title on one line even if it contains a semicolon; but if it is too long to fit on one line, write it on two lines, splitting it at the semicolon. Also capitalize the major words.

Write your name under the title. Leave a blank line, write "By," and put your name after it. If there are two authors, separate their names with "and"; if there are more than two, separate their names with commas.

Move to the bottom. You will have just three lines at the bottom, where you write your class name and section on one line, the teacher's name on the second line, and the date on the third and final line.

Center the text horizontally. Highlight the text, go to the Paragraph group, and click the center button.

Format your page. Like the rest of the paper, the title page should have 1-inch margins and a legible font such as Times New Roman in 12-point size.

Making a cover page in Chicago style:

Type your title. Go 1/3 of the way down the page and write your title in capital letters on a single line, unless it has a subtitle. If it has a subtitle, write the subtitle on the next line, but try to fit the main title on the first line.

Skip some space. Leave four or five lines after writing the title, and then begin the next part of your cover page from there.

Write your name, class information, and date. Write your name, then hit the return key and write your class information, then press the return key again and write the date. The date should be formatted with the name of the month first, then the day, and then the year.

Center the text. Highlight the text, go to the Paragraph group, and click the center button.

Format your page. This style uses 1- or 1½-inch margins, which also apply to the rest of the paper. Chicago recommends Times New Roman or Palatino in 12-point font.

Friday, August 21, 2020

Medical History essays

Throughout any period of time, many changes occur, especially within the medical field. In half a century, the preoccupations and concerns of the American physician underwent a complete transformation. Two sources form the basis of this comparison: the first, written by Benjamin Rush, entitled Observations on the Duties of a Physician, and the other, the first Code of Ethics of the American Medical Association. As a fundamental difference between the two sources, an examination of the authors should not be overlooked. The former source was written by a single man, while the second was a collaborative effort by an association. As a result, knowledge of medicine spread widely throughout the ever-growing population of America. Over this span of fifty years, other key differences can be observed. One that will be discussed is the American Medical Association's focus on a superior level of professionalism. Another difference within the articles was the social class of physicians. The last observation concerned the role the patients played. In his article, Benjamin Rush recommends certain behaviors to help the physician blend in with the rest of society. One example, and one of the most crucial recommendations, was to live in the country, on a farm. By following this recommendation, the physician would show no superiority over the common people. Not only would the appearance of superiority be diminished, but agriculture would benefit. Medicine is primarily based on science, which works directly with agriculture. In this way the physician would share his discoveries for the advancement of medicine, and promote improvements within the country. Another benefit of living on a farm that Rush described was the occupation it provided for the off, or healthy, season. Since the medical field was not as prosperous, physicians could ...

Monday, June 1, 2020

Cybercrime Criminal Offence

What is Cybercrime? At this point in time there is no commonly agreed definition of Cybercrime. The area of Cybercrime is very broad, and the technical nature of the subject has made it extremely difficult for authorities to come up with a precise definition. The British police have defined Cybercrime as the use of any computer network for crime, and the Council of Europe has defined it as any criminal offence committed against or with the help of a computer network. The two definitions are both very broad, and they offer very little insight into the nature of the conduct which falls under the defined term. Most of us have a vague idea of what Cybercrime means, but it seems very difficult to pinpoint the exact conduct which can be regarded as Cybercrime. For the purposes of this dissertation, I shall attempt to come up with my own definition of Cybercrime, since the available definitions do not adequately explain the concept. In order to understand and provide better insight into the nature of Cybercrime, it is helpful to divide Cybercrime into two categories, because computers can be used in two ways to commit Cybercrime. The first category includes crimes in which the computer is used as a tool to commit the offence. The computer has enabled criminals to use technology to commit crimes such as fraud and copyright piracy. The computer can be exploited just like any other technical device: for example, a phone can be used to verbally abuse or stalk someone, and in the same way the internet can be used to stalk or verbally abuse someone. The second category includes offences which are committed with the intention of damaging or modifying computers. In this category the target of the crime is the computer itself, as in offences such as hacking. Whichever category the offence falls into, ultimately it is us humans who have to suffer the consequences of Cybercrime.
Now that we know there are two ways in which the computer can be used to commit offences, my definition of Cybercrime would be: illegal acts using the computer as an instrument to commit an offence, or targeting a computer network to damage or modify computers for malicious purposes. Even my definition cannot be regarded as precise; as pointed out earlier, due to the broad and technical nature of Cybercrime, it is almost impossible to come up with a precise definition. The term Cybercrime is a social term used to describe criminal activities which take place in the world of computers; it is not an established term within the criminal law. The fact that there is no legal definition of Cybercrime within criminal law makes the whole area very complicated for the concerned authorities and the general public. It creates confusion, such as what constitutes Cybercrime; and if Cybercrime cannot be defined properly, how will victims report the crime? The lack of a proper definition means that the majority of Cybercrime goes unreported, as victims and authorities are often not sure whether the conduct in question is a Cybercrime. It is estimated that 90% of the Cybercrime which occurs is unreported.

Types of Cybercrime

Computers can be used to commit various crimes. In order to gain a better understanding of Cybercrime, we shall look individually at the types of crimes which are committed in the world of computers. It will not be possible to describe every type of Cybercrime which exists due to the word limit; we will only concentrate on crimes which are considered major threats to our security.

First Category

Fraud

Fraud can be defined as the use of deception for direct or indirect financial or monetary gain. The internet can be used as a means of targeting the victim by replicating real-world frauds, such as get-rich-quick schemes which don't exist, or emails which demand an additional fee to be paid via credit card to stop the loss of a service such as internet access or banking.
The increasing availability of the internet means that fraudsters can carry out fraudulent activities on a grand scale. Fraud is a traditional crime which has existed for centuries; the internet is merely a tool by which the fraudster's actions are carried out. Fraud has become a serious threat to e-commerce and other online transactions. Statistics suggest that the internet accounts for only 3% of credit card fraud; credit card fraud is one of the more difficult frauds to commit on the internet. However, other forms of fraud, such as phishing, are easier to carry out using the internet and are equally lucrative. Phishing is a form of fraud which is rapidly increasing. Phishing occurs when you receive emails purporting to come from commercial organizations such as your bank or other financial institutions, asking you to update your details; the emails look genuine, but they are a scam to trick people into giving away their details. There are no official figures available on phishing scams, but on average I receive about three emails every day asking me to update my bank account details. Recently there was an email going around asking the staff members and students of LSBU to update their personal details; the email looked genuine, but the ICT staff told students and staff to ignore it, as it was a trick to gain personal information. With the advancement of technology it has become easier and cheaper to communicate, and fraudsters are also taking advantage of this, because the internet is easier to exploit and cheaper than alternatives such as phone and postal mail. There are other forms of fraud, such as auction fraud: you buy goods in an online auction and pay for the item, but the item never turns up. Fraud is one of the most lucrative crimes on the internet; experts suggest it generates more money than trafficking drugs.
The reasons why fraudsters prefer the internet are these: the internet has made mass communication easy and cheap, so the same email can be sent to millions of people very easily and cheaply with just one click of a button. The majority of users do not have adequate knowledge of how the technology works, which makes it easy for fraudsters to fool innocent people into taking an action such as giving away their personal details. Internet users are considered naive in the sense that they have too much faith in the information they receive via the internet; therefore, they do not take the necessary steps to verify the information, and are often tricked into handing out their credit card or personal details.

Offences against person(s)

An offence against a person can be either physical or mental. It is not possible to cause direct physical harm to a person using a computer, but it is possible to cause mental harm such as anxiety, distress or psychological harm. This can be done by sending abusive or threatening emails or posting derogatory information online. Stalking is a crime carried out to harass another person repeatedly. As the number of users on the internet has increased, the opportunities for abuse have also increased. It is possible to use the internet as a tool for sending abusive emails, leaving offensive messages on guestbooks, or posting misinformation on blogs. In some cases, cyberstalkers have morphed images of their victims onto pornographic images and then emailed the pictures to relatives and work colleagues to cause embarrassment. There are mainly three reasons for committing a crime such as stalking. The main reason is usually a failed relationship: former intimates target their ex-boyfriend or ex-girlfriend to get revenge. The second reason for cyberstalking is boredom: some people pick random people and target them with abusive and threatening emails just for fun. Cyberstalkers take advantage of the anonymity of the internet to cause distress in their victim's life.
Hate and racist speech is also a form of crime which has escalated since the introduction of the internet; it can cause traumatic experiences and mental distress to those who are targeted. Post 9/11, many websites have been set up to mock the religion of Islam, such as www.laughingatislam.com; this website has been a cause of distress to many Muslims around the world.

Sexual offences

This category includes offences which have a sexual element, such as making undesired sexual approaches in chat-rooms and paedophiles harassing children. Child pornography and child protection are among the main concerns on the internet. Paedophiles are taking full advantage of the technology for viewing and exchanging child pornography. Paedophiles use the internet to their advantage: they use chat rooms and other popular social networks such as Facebook to entice and lure children into meeting them. Many popular services such as MSN Chat and Yahoo Chat have closed down their chat rooms to protect young children, but the closure of popular chat rooms has not stopped paedophiles from using less popular chat rooms and other social networks.

Second category

Hacking related offences

Hacking can be defined as gaining unauthorised access to a computer system. As soon as we hear the word hacking, we tend to think of it as a crime; it should be noted, however, that hacking started off as a show of skill to gain temporary access to computer systems, an intellectual challenge rather than a criminal pursuit. But now many hackers misuse their skills to inflict damage and destruction. Examples of hacking include stealing confidential information, such as credit card details. In a recent incident, the website of Harriet Harman, a politician taking part in the upcoming elections, was hacked, and the blog section of her website encouraged the audience to vote for Boris Johnson, one of her competitors. Boris Johnson has also complained that his email account was recently hacked.
Most politicians believe that the internet as a medium will be a major part of election campaigns, and activities such as hacking can sabotage election campaigns by posting disinformation on candidates' websites.

Viruses and Other Malicious Programs

A virus is malicious code, or a program, that replicates itself and inserts copies or new versions of itself into other programs, affecting computer systems. Viruses are designed to modify computer systems without the consent of the owner or operator, and are created to inflict senseless damage on computer systems. It is a widely accepted perception that crime is committed in times of economic distress, but virus writers do not gain any monetary benefit; viruses are released simply to show off the author's computer skills. Some viruses are failed programs or accidental releases. The most famous virus ever released is the I LOVE YOU virus, commonly known as the love bug. The virus damaged millions of computers worldwide and caused damage worth $8.5bn; the author of the virus claims it was released to impress his girlfriend.

Legislation on Cybercrime

It is often believed that the internet is just like the Wild West, where there are no rules and regulations and people are free to carry out illegal activities. Fortunately, this is not true at all; there is legislation which exists to protect us from cybercrimes:

Fraud: Fraud Act 2006 (covers all types of possible frauds)
Offences against person(s): The Public Order Act 1986 (hate speech)
Sexual offences: The Protection of Children Act 1978; The Criminal Justice Act 1988; The Criminal Justice and Public Order Act 1994; Sexual Offences Act 2003

After carefully reviewing all the pieces of legislation mentioned above, I can conclude that the legislation we have at the moment is adequate to protect us from any sort of traditional crime carried out using computers. There were a few anomalies, which have now been removed.
Anomalies

The Theft Act 1968, which previously covered fraud, has been replaced by the Fraud Act 2006 to remove an anomaly under the previous legislation. In the case of Clayman, it was held that it is not unlawful to defraud a computer; the courts do not regard computers as deceivable, as the process is fully automated. In theory, if we apply the principle deriving from the Clayman case, it would not be unlawful to give a false credit card number when signing up for an online service such as a subscription to a newsgroup or online gaming. The only exception to this rule is where the deception involves licensed telecommunications services, such as dial-up chat lines or pay-per-view TV. The second anomaly before us was that information was not regarded as property. In the case of Oxford v Moss, a student took a forthcoming exam paper from a lecturer's desk and made a photocopy of it; it was held that the student could not be charged under the Theft Act, as he did not deprive the owner of the asset, since a copy had simply been taken. Computers only contain information. Applying the principle deriving from this case, it would be acceptable to print other people's files as long as one does not deprive the owner of the file by deleting it; one would only be prosecuted for stealing a trade secret or confidential information. The decisions in both cases mentioned above are absurd; both were decided in the 1970s, and the only plausible explanation for such decisions is a lack of knowledge of the technology. Previous legislation took into account the consequences of the fraudster's activities when deciding whether the conduct in question was an offence. The Fraud Act 2006 aims to prosecute fraudsters on the basis of their actual conduct rather than the consequences of their activities.

How serious is the threat?

In order to determine the seriousness of the threat, it is important to look at the statistics available on cybercrime.
Type of crime: Number of cases reported
Fraud: 299,000
Offences against the person: 1,944,000
Sexual offences: 850,238
Computer misuse (hacking): 144,500
Virus-related incidents: 6,000,000
Total number of cases reported: 9,237,738
(Source of statistics: Garlik)

According to these figures, there were approximately 9.23 million incidents of cybercrime reported in the year 2006. Statistics show that 15% of the population of the UK was affected by cybercrime in some way; looking at these figures, one can easily conclude that we are having an epidemic of cybercrime. These statistics may be only the tip of the iceberg of the totality of cybercrime; experts believe the real figure could be 10 times higher, as cybercrime is massively under-reported.

Reasons for under-reporting

Reporting any crime involves a three-stage process: the conduct needs to be observed; the conduct needs to be categorised as criminal; and the relevant authorities need to be informed of the criminal conduct. A particular crime will not be reported if there is a failure at any of these stages, and the relevant authorities will then not take action against the criminal. There are certain factors which affect the reporting of cybercrime. Sometimes the criminal conduct is not noticed: internet fraud usually comprises low-value transactions across a bulk body of victims, and victims are not always able to spot the discrepancy in their bank accounts. Lack of awareness means that victims may not know whether the conduct in question is a crime. Victims of viruses don't see themselves as victims of crime; people tend to see viruses as a technical issue, and therefore believe that no one has broken the law. Most victims don't know which authorities they should contact to report cybercrime. Police officers have inadequate resources and currently lack the expertise to deal with cybercrime; therefore, pursuing a formal complaint can be a difficult process.
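As a quick sanity check on the Garlik figures quoted above, the per-category counts do add up to the stated total (a small illustrative snippet, not part of the original report):

```python
# Reported cybercrime incidents for 2006, per the Garlik figures quoted above
incidents = {
    "Fraud": 299_000,
    "Offences against the person": 1_944_000,
    "Sexual offences": 850_238,
    "Computer misuse (hacking)": 144_500,
    "Virus-related incidents": 6_000_000,
}

total = sum(incidents.values())
print(total)  # 9237738, matching the stated total of 9,237,738

# Roughly 9.24 million incidents against a UK population of about 60 million
# is consistent with the claim that around 15% of the population was affected,
# assuming some people suffered more than one incident.
```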
I once tried reporting a cybercrime: a laptop was purchased on eBay, but the seller took my money and never sent the laptop, a common case of auction fraud. I did try to make a complaint, but the whole process was extremely slow; the officer dealing with me had no clue what eBay is. I was able to register a complaint, but it has been two years and my complaint is still unresolved. Under-reporting is a factor which contributes to the increase in cybercrime: it means that criminals have less fear of getting caught, and they are therefore more likely to commit illegal acts online.

People's attitude towards cybercrime

Traditional crimes such as murder, rape and robbery can have serious effects on the victim's life; in some cases the victim may not be able to lead a normal life after being a victim of crime. In contrast, the impact of cybercrime is not that serious: the majority of users have insurance against financial frauds, and frauds are usually of low value. Viruses can easily be filtered using antivirus software. Other offences, such as cyberstalking, usually cause some anxiety and distress. Only crimes such as child pornography have a greater impact; it is the only cybercrime which can have serious consequences for the victim's life. A recent survey suggests that only 37% of victims are afraid to use the internet after being a victim of crime; the majority of users continue to use the internet after being a victim.

Cybercrime and e-commerce

Cybercrime is a growing concern for all of us; however, the effects of cybercrime are not hindering the growth of the internet, and its effects on e-commerce have not been drastic. Financial transactions over the internet are on the rise, the number of people using the internet for shopping is increasing day by day, and over one third of the population is using internet banking.
One of the reasons why cybercrime is spiralling out of control is the fact that it is very easy to commit if you have the technical knowledge: all you need is a computer connected to the internet, and crimes on the internet are hard to detect. Cybercrime can be committed from anywhere in the world; the criminal could be sitting in Africa and targeting his victim in Australia. In the next chapter, we shall examine the problems faced by the authorities when investigating cybercrime.

Jurisdiction

Jurisdictional issues and cyberspace

Cyberspace is a world without defined boundaries; anyone can access any website using his computer. It can be very difficult to locate the source of a crime in cyberspace, because of the relative anonymity it offers and the ease with which identity can be shielded. Even if the relevant authorities are able to identify the source of the crime, it is not always easy to prosecute the criminal.

Double criminality

When dealing with cross-border crime, it is imperative that both countries recognise the conduct as illegal in both jurisdictions. The principle of double criminality prohibits the extradition of a person if the conduct in question is not recognised as a criminal offence by the country receiving the request for extradition. Imagine a situation where a computer programmer from Zimbabwe sends Barclays bank a virus which causes the bank's computers to malfunction; the bank cannot carry out its business for an hour, and as a result it loses about $1 million worth of revenue. The English authorities would want to extradite the offender to England so they could prosecute him. In an action for extradition, the applicant is required to show that the actions of the accused constitute a criminal offence exceeding a minimum level of seriousness in both jurisdictions. Imagine now that there are no laws on spreading viruses in Zimbabwe; it would therefore not be possible to show that the offender's actions constitute criminal behaviour.
If there are no laws on cybercrime in Zimbabwe, then he cannot be extradited, and he will walk free after deliberately causing damage to Barclays bank. Cybercrime has an international dimension, and it is imperative that legal protection is harmonised internationally. There are still about 33 countries, such as Albania, Yugoslavia and Malta, which have no laws on cybercrime; countries with no such laws are considered computer crime havens. The perpetrator of the I LOVE YOU virus, which caused $8.5 billion worth of damage, was caught in the Philippines, but he could not be prosecuted, as the Philippines had no laws on cybercrime. Cybercrime is a global issue, and the world will need to work together in order to tackle it.

How is real-world crime dealt with across borders?

In relation to real-world crime which transcended national borders, an idiosyncratic network of Mutual Legal Assistance Treaties (MLATs) bound various countries to assist each other in investigating real-world crime, such as drug trafficking. If there was no treaty agreement between two countries, they would contact the relevant authorities to ask for assistance and obtain evidence; this mechanism was sufficient for dealing with real-world crime. It can only work for cybercrime, however, if both countries have similar cybercrime laws; if either country lacks cybercrime laws, the process will fail.

How should jurisdiction be approached in cybercrime?

In one case of cyberstalking, an Australian man was stalking a Canadian actress, harassing her by sending unsolicited emails. The Supreme Court of Victoria in Australia held that crimes committed over the internet know no borders, and that state and national boundaries do not concern them; therefore, jurisdiction should not be the issue. He was convicted. This case was straightforward, as both nations recognise stalking as a criminal offence; however, conflicts can arise if the two nations do not both recognise the act as criminal.
In LICRA v Yahoo, the French courts tried to exercise jurisdiction over an American company. Yahoo was accused of auctioning Nazi memorabilia contrary to Article R645-1 of the French Criminal Code. Yahoo argued that it was not in breach of Article R645-1, as it was conducting the auction under the jurisdiction of the USA, and it is not illegal to sell Nazi memorabilia under American law. In order to prove that it was subject to American jurisdiction, Yahoo argued the following points: its servers are located on US territory; its services are primarily aimed at US citizens; and, since the First Amendment to the United States Constitution guarantees freedom of speech and expression, any attempt to enforce a judgement restricting that freedom would fail for unconstitutionality. The court ruled that it had full jurisdiction over Yahoo because: the auction was open to worldwide bidders, including France; it was possible to view the auction in France, and viewing and displaying Nazi memorabilia causes a public nuisance, which is an offence under French law; Yahoo had a customer base in France, and its advertisements were in French; and Yahoo knew that French citizens used its site, and therefore should not do anything to offend them. Yahoo ignored the French court's ruling, maintaining that the French court did not have the right to exercise jurisdiction over an American company. Yahoo was warned that it would have to pay heavy fines if it did not comply. In the end Yahoo's owners did comply with the judgement, as they had substantial assets in France which were at risk of being confiscated if they did not. The sole reason the French courts were able to exercise jurisdiction over Yahoo is that it is a multinational company with a large presence in France.
Imagine if, instead of France, the action had been taken by the courts of Saudi Arabia over the auctioning of Playboy magazines; under Saudi Arabian Sharia law, it is illegal to view or buy pornography. A Saudi Arabian court would have failed to exercise jurisdiction over Yahoo, as Yahoo has no presence in Saudi Arabia, even though it was possible to view Yahoo auctions from there. The case of Yahoo is a rare example of a court being able to exercise jurisdiction over a foreign company. In the majority of cases concerning individuals, courts trying to exercise jurisdiction over foreign elements are simply ignored. In the case of Nottinghamshire County Council v Gwatkin (UK), injunctions were issued against many journalists to prevent them from publishing or disseminating a leaked report that strongly criticised [the Council's] handling of allegations of satanic abuse of children in the 1980s. Despite the injunctions, the report appeared on an American website. The website refused to respect the English jurisdiction, arguing that the report was a public document, and Nottinghamshire had no option but to drop the case. Cybercrime has an international dimension. International law is a complicated area, and it can be very difficult to co-operate with authorities where diplomatic ties are weak or non-existent. For example, Pakistan and Israel have no diplomatic ties. If a situation arose where an Israeli citizen hacked into the State Bank of Pakistan and stole millions of dollars, one could easily assume that the two countries would not co-operate with each other; even though both countries recognise hacking as an offence, they have no diplomatic ties, and the hacker would most probably get away with the crime. In one case involving Russian hackers, the hackers broke into PayPal and stole 53,000 credit card details. PayPal is an American company.
The Russian hackers blackmailed PayPal, demanding a substantial amount of money and threatening to publish the details of the 53,000 credit cards if they did not receive it. Despite the avenues for co-operation between Russia and the United States, the Russian authorities failed to take action, and it is still not clear why they did not act against the hackers. Both nations struggled to gain jurisdiction over each other's citizens. The FBI decided to take matters into its own hands by setting up a secret operation: undercover agents posed as representatives of a bogus security firm, Invita. The bogus firm invited the Russian hackers to the US with the prospect of employment. During the job interview, the hackers were asked to demonstrate their hacking skills. One of them accessed his own system in Russia to show off, and the FBI recorded every keystroke, later arresting the hackers for multiple offences including hacking, fraud and extortion. The recorded keystrokes were then used to access one of the hackers' computers in Russia to obtain incriminating evidence. All this took place without the knowledge of the Russian authorities. When they learned of the incident, they were furious and argued that the US had misused its authority and infringed on another sovereign nation's jurisdiction. Lack of co-operation over jurisdiction can lead to serious problems between nations. To avoid such conflicts, there is a need to address the jurisdiction issue and devise a mechanism that ensures countries co-operate with each other. Where is the Jurisdiction? In real-world crime, the conduct and its effect are easy to pin down because we can visibly see the human carrying out the conduct, and the effect of the conduct is also visible. The location of the offence and the location of the perpetrator can easily be identified. 
Imagine a situation in which a shooter in Canada shoots an American across Niagara Falls: the conduct took place in Canada and the effect of the conduct took place in the United States. Cyberspace is not like this; people say that events in cyberspace occur everywhere and nowhere. A man disseminating a virus could release one that travels through servers in many different countries before reaching the victim. For example, a person in Malta makes a racist website targeting Jews, uploads it to American servers, and the website is available for everyone to see; a Jewish person living in Israel comes across the website and is offended. In a situation like this, where would you bring an action? In Malta, because the perpetrator is based there? In America, where the server is hosted? Or in Israel, where the victim is? There are no specific laws governing jurisdiction on the internet, and the world is still struggling to come up with a solution to the problem. Positive or Negative Jurisdiction? Negative jurisdiction occurs when no country is willing to exercise jurisdiction over a cybercrime. A cybercrime can have multiple victims in different countries; the Love Bug virus caused damage in many countries including the USA, UK, France and Germany. If damage is caused in multiple countries, who should claim jurisdiction over the cybercrime? Should priority be given according to the amount of damage suffered by each country? The affected countries may decide not to take action against the perpetrator because it is not in their best interest, or because they are occupied with other internal problems. If no country is willing to exercise jurisdiction over a cybercrime, the perpetrator walks free. 
Positive jurisdiction is the opposite of negative jurisdiction: how is the issue to be decided if two or more countries want to exercise jurisdiction over the perpetrator? It is an established principle that one cannot be tried in two different courts for the same offence; in such a situation, the country which has suffered the most damage might be given priority. The area of positive and negative jurisdiction remains unclear, as there are no cases or agreements resolving such problems. Jurisdictional issues such as double criminality, determining jurisdiction, and conflicts of positive and negative jurisdiction are among the most complex issues of cybercrime; unless they are resolved, we cannot make progress in curbing cybercrime. The Council of Europe has been working on a global governance model to deal with trans-border cybercrime. Council of Europe The Council of Europe began studying cybercrime twenty years ago, when computers were first being introduced; it was obvious that in the future they would be used to commit crime. After years of research, the Council of Europe proposed a convention. The convention on cybercrime The convention on cybercrime is the first international treaty on crimes committed using computers. It recognises that cybercrime is an international threat and proposes a traditional approach to the problem: the nation whose citizens suffer harm should exercise jurisdiction over the perpetrator. Section 3 of the convention states that in the event of positive jurisdiction, nations should consult with each other to reach the best decision. Section 3 is unclear on positive jurisdiction, which can lead to conflicts and slow or no co-operation between nations. I propose that section 3 should be amended and that an independent committee be appointed by the Council of Europe to decide the best course of action. 
The convention starts by requiring every member to define certain activities as criminal, thus achieving international harmonisation and eradicating possible problems of double criminality. The aim of the convention is to set up a fast and effective regime of international co-operation, to be achieved by establishing a 24/7 point of contact for immediate assistance in every country. The convention requires nations to adopt a standard procedure when investigating and prosecuting cybercrime. It requires parties to adopt legislation designed to facilitate investigation by: expediting the preservation and production of electronic evidence; applying search and seizure law to computer systems; and authorising law enforcement to collect traffic data and content data. Parties must also co-operate in: extraditing offenders; sharing information; and preserving, accessing, intercepting and disclosing traffic and content data. In simple language, we can interpret the convention on cybercrime as creating a massive surveillance network that puts our civil liberties under threat. Information can be exchanged between all national governments. Is it a good idea to share information about British citizens with the French government? Is it necessary to monitor every internet user to control a very small minority of cybercriminals? Will the convention on cybercrime have any impact on curbing cybercrime? Its prospects of success are very low; so far, ratification of the treaty has been an extremely slow process. The convention opened for signature and ratification in November 2001, and in three years only thirty-eight countries had signed up, of which only eight had ratified the treaty. The remaining thirty signatories are yet to ratify it. 
There are one hundred and ninety-five countries in the world, and the convention on cybercrime is open to all of them, yet only a small minority have signed up; how can the convention be a success if only a handful of countries are willing to participate? The internet is available everywhere, even in the poorest countries such as Burkina Faso. If someone in Burkina Faso commits a cybercrime, he will walk free, because Burkina Faso has no laws on cybercrime and is not party to the convention on cybercrime. Other Alternatives The Evidence Imagine a man walks into his local bank with the intention of robbing it. He goes in and uses the most clichéd dialogue, "this is a stick up, give me all the money and no one gets hurt", passes a large sack with a dollar sign on it to the cashier and tells him to fill it up with cash or be hurt; the cashier carries out his instructions. The robber runs out of the bank with large bags of money and, as soon as he is outside, drives off in a BMW. Whenever a crime is committed, the police are called to investigate and collect evidence, so that the robber can be identified and the case proved beyond reasonable doubt. When the police arrive to investigate the robbery, their first step is to collect eyewitness testimony: what did the robber look like, how tall was he, and which car did he get away in? The police will also have access to CCTV footage, which helps to identify the robber. The second stage is to collect physical evidence such as fingerprints. The police also try to trace the car he was driving; one of the customers in the bank managed to see its number plate, and using the registration number the police are able to catch him, search his house and retrieve the cash stolen from the bank. The police are successful in obtaining enough evidence to convict the robber beyond reasonable doubt. 
After spending 10 years in jail, the robber is released. He is planning another bank robbery, but this time he will use a computer to steal the money instead of walking into a bank. He moves to South Africa, goes into a local internet café and connects to the internet. To disguise his tracks, he picks networks with weak security: he hacks into a server of the University of South Wales, which is operated by the Public Library of Wales. Using the university's server, he hacks into the server of the Public Library of Wales, and from there he hacks into the same bank he robbed before. He logs in, creates a dummy account and transfers the money to an untraceable offshore bank account. The police again arrive at the same bank to investigate a robbery, but the crime scene is completely different: there are no eyewitnesses, no one saw the robber, and there is no physical evidence. The first step is to speak to the systems administrator and gather all the information relating to the robbery that may be stored on the computers. After going through the information, the police find one piece that may lead to the robber: an IP address, the internet's equivalent of a phone number. The IP address tells them the hacking took place from the Public Library of Wales; investigating the library's servers, they find another IP address originating from the University of South Wales. They then move their investigation to the University of South Wales and learn that the hacking came from South Africa. The investigation moves to South Africa, where they manage to track down the internet café. After speaking to its staff, no one is able to give any clues; the systems administrator does not keep comprehensive records, so it is not possible to gather further evidence against the robber. 
The robber moves to Canada shortly after carrying out the robbery. After an exhausting investigation, the police are unable to collect enough evidence against the robber to prove the crime beyond reasonable doubt. The criminal in this scenario is almost a phantom: no one knows who he is or what he looks like. Digital evidence is fragile and not easy to collect, because computers have enhanced the ability to cover up tracks. The cybercrime scene creates significant forensic challenges for law enforcement agencies in obtaining evidence and subsequently presenting it before the courts. One of the biggest problems is that law enforcement agencies rely on third parties for evidence; in the scenario above, the third parties are the University of South Wales and the Public Library of Wales, and if either fails to keep comprehensive records, the investigation becomes impossible. There is little other source of evidence: the only other place evidence could come from is the computer used to carry out the hack, and most professional hackers destroy their laptops or replace the hard drive after an attack. The area of digital evidence is very technical and complex, and research is still being carried out to make the whole process more efficient.

Saturday, May 16, 2020

Importance Of Computer Automation To Insurance Companies Finance Essay

The importance of programming is of prime value for Actuarial Science and for the actuarial profession. Complex calculations merged with routine, task-based calculations have made programming a viable source of automation. In this dissertation we show how the programming language R can be used in claim models to compute aggregate claims using Poisson, binomial and negative binomial distributions. We also demonstrate how to use the MortalitySmooth package to compute deaths and exposure data suitable for smoothing mortality data. An essential aspect of this method is that smoothing the data allows forecasting of mortality, which we use in computing annuities for different countries. We illustrate these methods using the Danish dataset for aggregate claims and the Human Mortality Database (HMD, https://www.mortality.org), a collection of mortality data for various developed countries. Chapter 1 Introduction An insurance firm makes insurance products and attains profitability by charging premiums that exceed the overall expenses of the firm, and by making wise investment decisions that maximise returns under optimal risk conditions. The method of charging premiums depends on many underlying factors, such as the number of policyholders, the number of claims, the amount of claims, and the health, age and gender of the policyholder. Some of these factors, such as aggregate loss claims and human mortality rates, have an adverse impact on premium calculation if the firm is to remain solvent. These factors need to be modelled using large amounts of data, many simulations and complex algorithms to determine and manage risk. 
In this dissertation we consider two important factors affecting premiums: aggregate loss claims and human mortality. We use theoretical simulations in R, model aggregate claims with the Danish data, and obtain human mortality rates from the Human Mortality Database, which we smooth in order to price life insurance products. In chapter 2 we examine the concept of compound distributions in modelling aggregate claims and perform simulations of compound distributions using R packages such as MASS and actuar. Finally, we analyse Danish loss insurance data from 1980 to 1990 and fit appropriate distributions using customised, generically implemented R methods. In chapter 3 we explain briefly the concepts of graduation, generalised linear models and smoothing techniques using B-splines. We obtain deaths and exposure data from the Human Mortality Database for two selected countries, Sweden and Scotland, and implement mortality rate smoothing using the MortalitySmooth package. We compare mortality rates across sets such as males and females within a country, or total mortality rates across Sweden and Scotland, for a given time frame, by age or by year. In chapter 4 we look into various life insurance and pension-related products widely used in the insurance industry, and construct life tables and commutation functions to implement annuity values. Finally, in chapter 5 we present concluding comments. Chapter 2 Aggregate Claim distribution 2.1 Background Insurance companies implement numerous techniques to evaluate the underlying risk of their assets, products and liabilities on a day-to-day basis for many purposes. 
These include: computation of premiums; initial reserving to cover the cost of future liabilities; maintaining solvency; and reinsurance agreements to protect against large claims. In general, the occurrence of claims is highly uncertain and has an underlying impact on each of the above. Thus modelling total claims is of high importance in ascertaining risk. In this chapter we define claim distributions and aggregate claims distributions and discuss some probability distributions that fit the model. 2.2 Modelling Aggregate Claims The dynamics of the insurance industry affect the number of claims and the amount of claims differently. For instance, expanding insurance business proportionally increases the number of claims but has negligible or no impact on claim amounts; conversely, cost-control initiatives and technological innovations affect claim amounts but have no effect on the number of claims. Consequently, aggregate claims are modelled on the assumption that the number of claims occurring and the amounts of claims are modelled independently. 2.2.1 Compound distribution model We define the compound distribution as follows: S is the random variable denoting the total claims occurring in a fixed period of time; X_i denotes the amount of the i-th claim; and N is a non-negative, independent random variable denoting the number of claims occurring in the period. Further, X_1, X_2, ... is a sequence of i.i.d. random variables with probability density function f(x) and cumulative distribution function F(x), with P(X_i > 0) = 1 for 1 <= i <= N. Then the aggregate claim S is S = X_1 + X_2 + ... + X_N (with S = 0 when N = 0), with expectation and variance E[S] = E[N]E[X] and Var[S] = E[N]Var[X] + Var[N](E[X])^2. Thus S, the aggregate claim, is computed using the collective risk model and follows a compound distribution (pg 86, Non-Life Actuarial Models: Theory, Methods and Evaluation). 2.3 Compound distribution for aggregate claims As discussed in Section 2.1, S follows a compound distribution, 
where N, the number of claims, is the primary distribution and X, the amount of a claim, the secondary distribution. In this section we describe the three main compound distributions widely used to model aggregate claims. The primary distribution N can be modelled by non-negative integer-valued distributions such as the Poisson, binomial and negative binomial; the choice of distribution depends on the case at hand. 2.3.1 Compound Poisson distribution The Poisson distribution is the distribution of occurrences of rare events; the number of accidents per person, the number of claims per insurance policy and the number of defects found in manufacturing are real-world examples. Here the primary distribution N has a Poisson distribution with parameter λ, denoted N ~ P(λ). The probability function, expectation and variance are P(N = x) = e^(-λ) λ^x / x! for x = 0, 1, 2, ..., with E[N] = Var[N] = λ. Then S has a compound Poisson distribution with parameters λ and F_X, denoted S ~ CP(λ, F_X), with E[S] = λE[X] and Var[S] = λE[X^2]. 2.3.2 Compound Binomial distribution The binomial distribution is the distribution of the number of successes in a fixed number of trials; the number of males in a company or the number of defective components in a random sample from a production process are examples. The compound binomial distribution is a natural choice to model aggregate claims when there is an upper limit on the number of claims in a given time period. Here the primary distribution N has a binomial distribution with parameters n and p, denoted N ~ B(n, p). The probability function, expectation and variance are P(N = x) = C(n, x) p^x (1 - p)^(n - x) for x = 0, 1, 2, ..., n, with E[N] = np and Var[N] = np(1 - p). Then S has a compound binomial distribution with parameters n, p and F_X, denoted S ~ CB(n, p, F_X). 2.3.3 Compound Negative Binomial distribution The compound negative binomial distribution also models aggregate claims. 
The variance of the negative binomial distribution is greater than its mean, so we can use the negative binomial instead of the Poisson when the data show greater variance than mean; in that case it provides a better fit. Here the primary distribution N has a negative binomial distribution with parameters n and p, denoted N ~ NB(n, p), with n > 0 and 0 < p < 1. The probability function, expectation and variance are P(N = x) = C(n + x - 1, x) p^n (1 - p)^x for x = 0, 1, 2, ..., with E[N] = n(1 - p)/p and Var[N] = n(1 - p)/p^2. Then S has a compound negative binomial distribution with parameters n, p and F_X, denoted S ~ CNB(n, p, F_X). 2.4 Secondary distributions: claim amount distributions In Section 2.3 we defined the three compound distributions widely used. In this section we define the distributions generally used to model claim amounts. We use positively skewed distributions, including the Weibull distribution, used frequently in engineering applications; we shall also look into distributions such as the Pareto and lognormal, which are widely used to study loss distributions. 2.4.1 Pareto Distribution The distribution is named after Vilfredo Pareto, who used it in modelling economic welfare; it is used today to model income distributions in economics. The random variable X has a Pareto distribution with parameters α and λ, where α, λ > 0, denoted X ~ Pa(α, λ) or X ~ Pareto(α, λ). The probability density function, expectation and variance are f(x) = αλ^α / (λ + x)^(α + 1) for x > 0, with E[X] = λ/(α - 1) for α > 1 and Var[X] = αλ^2 / ((α - 1)^2 (α - 2)) for α > 2. 2.4.2 Log normal Distribution The random variable X has a lognormal distribution with parameters μ and σ, where σ > 0, denoted X ~ LN(μ, σ^2), where μ and σ^2 are the mean and variance of log(X). The lognormal distribution has a positive skew and is a very good distribution for modelling claim amounts. 
The probability density function, expectation and variance are f(x) = (1/(xσ√(2π))) exp(-(log x - μ)^2 / (2σ^2)) for x > 0, with E[X] = exp(μ + σ^2/2) and Var[X] = exp(2μ + σ^2)(exp(σ^2) - 1). 2.4.3 Gamma distribution The gamma distribution is very useful for modelling claim amount distributions; it has α, the shape parameter, and λ, the rate parameter. The random variable X has a gamma distribution with parameters α and λ, where α, λ > 0, denoted X ~ G(α, λ) or X ~ Gamma(α, λ). The probability density function, expectation and variance are f(x) = λ^α x^(α - 1) e^(-λx) / Γ(α) for x > 0, with E[X] = α/λ and Var[X] = α/λ^2. 2.4.4 Weibull Distribution The Weibull distribution is an extreme-value distribution; because of its survival function it is widely used in modelling lifetimes. The random variable X has a Weibull distribution with parameters c and γ, where c, γ > 0, denoted X ~ W(c, γ), with probability density function f(x) = cγ x^(γ - 1) exp(-c x^γ) for x > 0 and expectation E[X] = c^(-1/γ) Γ(1 + 1/γ). 2.5 Simulation of Aggregate claims using R In Section 2.3 we discussed aggregate claims and the various compound distributions used to model them. In this section we perform random simulation using an R program. 2.5.1 Simulation using R The simulation of aggregate claims was implemented using packages such as actuar and MASS. The generic R code available in Programs/Aggregate_Claims_Methods.r implements simulation of randomly generated aggregate claim samples for any compound distribution. The following R code generates simulated aggregate claim data for a compound Poisson distribution with gamma claim amounts, denoted CP(10, F_X):

require(actuar)
require(MASS)
source("Programs/Aggregate_Claims_Methods.r")
Sim.Sample = SimulateAggregateClaims(
  ClaimNo.Dist = "pois", ClaimNo.Param = list(lambda = 10),
  ClaimAmount.Dist = "gamma", ClaimAmount.Param = list(shape = 1, rate = 1),
  No.Samples = 2000)
names(Sim.Sample)

The SimulateAggregateClaims method in Programs/Aggregate_Claims_Methods.r generates and returns simulated aggregate samples along with expected and observed moments. 
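Since Programs/Aggregate_Claims_Methods.r itself is not listed in the dissertation, here is a minimal self-contained sketch of what a SimulateAggregateClaims-style routine does; the function and argument names below are illustrative, not the actual implementation:

```r
# Draw n.samples aggregate claims S = X_1 + ... + X_N:
# rfreq draws the claim numbers N, rsev draws k claim amounts X
simulate.aggregate <- function(n.samples, rfreq, rsev) {
  sapply(rfreq(n.samples), function(k) sum(rsev(k)))
}

# Compound Poisson CP(10, F_X) with X ~ Gamma(shape = 1, rate = 1),
# matching the example above
set.seed(42)
agg <- simulate.aggregate(2000,
                          rfreq = function(n) rpois(n, lambda = 10),
                          rsev  = function(k) rgamma(k, shape = 1, rate = 1))

# Observed moments should be close to E[S] = lambda*E[X] = 10 and
# Var[S] = lambda*E[X^2] = 20
c(mean(agg), var(agg))
```

Replacing rpois with rbinom or rnbinom gives the compound binomial and compound negative binomial models of Sections 2.3.2 and 2.3.3.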
The simulated data can then be used to perform various tests, comparisons and plots. 2.5.2 Comparison of Moments The expected and observed moments are compared to test the correctness of the data. The following R code returns the expected and observed mean and variance of the simulated data respectively:

Sim.Sample$Exp.Mean; Sim.Sample$Exp.Variance
Sim.Sample$Obs.Mean; Sim.Sample$Obs.Variance

Table 2.1 Comparison of observed and expected moments for different sample sizes. Table 2.1 shows the simulated values for different sample sizes. Clearly the observed and expected moments are similar, and the difference between them shrinks as the number of samples increases. 2.5.3 Histogram with fitted distribution curves Histograms can provide useful information on skewness, extreme points in the data and outliers, and can be graphically compared with the shapes of standard distributions. Figure 2.1 shows the histogram of the simulated data compared with standard distributions: Weibull, normal, lognormal and gamma. Figure 2.1 Histogram of simulated aggregate claims with fitted standard distribution curves. Figure 2.1 represents the histogram of the simulated data along with the fitted curves for the different distributions. The histogram is plotted using 50 breaks. The simulated data are then fitted using the fitdistr() function in the MASS package for the normal, lognormal, gamma and Weibull distributions. 
The following R program shows how the fitdistr method is used to compute the gamma parameters and plot the corresponding curve in Figure 2.1:

gamma.fit = fitdistr(Agg.Claims, "gamma")
Shape = gamma.fit$estimate[1]
Rate = gamma.fit$estimate[2]
Left = min(Agg.Claims)
Right = max(Agg.Claims)
Seq = seq(Left, Right, by = 0.01)
lines(Seq, dgamma(Seq, shape = Shape, rate = Rate), col = "blue")

2.5.4 Goodness of fit A goodness-of-fit test compares the closeness of expected and observed values to decide whether it is reasonable to accept that the random sample comes from a standard distribution. It is a type of hypothesis test, with hypotheses defined as follows. H0: the data fit the standard distribution. H1: the data do not fit the standard distribution. The chi-square test is one way to test goodness of fit. The test uses the histogram and compares it with the fitted density, grouping the data into k intervals whose breaks are computed using quantiles; this gives the expected frequency E_i. The observed frequency O_i is calculated from the histogram counts, i.e. the product of the sample size and the difference of the fitted c.d.f. at consecutive breaks. The test statistic is defined as X^2 = Σ (O_i - E_i)^2 / E_i, where O_i is the observed frequency and E_i the expected frequency in each of the k cells. For the simulation we use 100 breaks to split the data into equal cells and use the histogram count to group the data by observed value. Large values of X^2 lead to rejection of the null hypothesis. The test statistic follows a chi-square distribution with k - p - 1 degrees of freedom, where p is the number of parameters estimated from the sample. The p-value is computed using 1 - pchisq(), and the fit is accepted if the p-value is greater than the significance level α. 
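The procedure just described can also be sketched without the dissertation's PerformChiSquareTest helper (all names below are illustrative): the sample is binned into equiprobable cells under the fitted distribution and the statistic referred to a chi-square distribution with k - p - 1 degrees of freedom.

```r
library(MASS)  # for fitdistr

set.seed(7)
x <- rgamma(500, shape = 2, rate = 0.5)   # stand-in for aggregate claim data

fit <- fitdistr(x, "gamma")               # p = 2 estimated parameters
sh <- fit$estimate["shape"]; rt <- fit$estimate["rate"]

k <- 10                                   # number of cells
breaks <- qgamma(seq(0, 1, length.out = k + 1), shape = sh, rate = rt)
obs  <- as.vector(table(cut(x, breaks)))  # observed frequencies O_i
expf <- rep(length(x) / k, k)             # expected frequencies E_i (equal cells)

X2 <- sum((obs - expf)^2 / expf)          # test statistic
p.value <- 1 - pchisq(X2, df = k - 2 - 1) # k - p - 1 degrees of freedom
c(X2, p.value)
```

A large p-value here would indicate no evidence against the fitted gamma model, consistent with the data having been drawn from a gamma distribution.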
The following R code computes the chi-square test:

Test.ChiSq = PerformChiSquareTest(
  Samples.Claims = Sim.Sample$AggregateClaims, No.Samples = N.Samples)
Test.ChiSq$DistName
Test.ChiSq$X2Val; Test.ChiSq$pvalue
Test.ChiSq$Est1; Test.ChiSq$Est2

Table 2.3 Chi-square statistics and p-values for the compound Poisson distribution. The highest p-value signifies the best fit of the data to a standard distribution; the Weibull distribution is the best fit, with parameters shape = 2.348 and scale = 11.32. 2.6 Fitting Danish Data 2.6.1 The Danish data source of information In this section we use a statistical model and fit a compound distribution to compute aggregate claims using historical data. Fitting data to a probability distribution in R is an interesting exercise, and it is worth quoting: "All models are wrong, but some are useful." In the previous section we explained fitting distributions, comparison of moments and goodness of fit for simulated data. The data source is the Danish data compiled from Copenhagen Reinsurance, containing over 2000 fire loss claims recorded during the period 1980 to 1990. The data are adjusted for inflation to reflect 1985 values and are expressed in millions of Danish Kroner (DKK). There are 2167 rows of data over 11 years. Grouping the data by year would give only 11 aggregate samples, insufficient to fit and plot the distribution; therefore the data are grouped by month, giving 132 samples. The expectation and variance of the aggregate claims are 55.572 and 1440.7 respectively. Figure 2.2 plots the time series of claim numbers, showing the different claims that occurred monthly from 1980 to 1990, along with the extreme loss values and their times of occurrence. 
There are no seasonal effects in the data: a two-sample t-test comparing summer and winter data shows no significant difference, so we conclude there is no seasonal variation. Figure 2.2 Time series plot of Danish fire loss insurance data, month-wise, 1980-1990. The data are plotted and fitted to a histogram using the fitdistr() function in the MASS package. 2.6.2 Analysis of Danish data We take the following steps to analyse and fit the data: obtain the claim numbers and aggregate loss claim data month-wise; choose the primary distribution to be Poisson or negative binomial and use fitdistr() to obtain its parameters; assume a gamma distribution as the default claim amount distribution and use fitdistr() to obtain the shape and rate parameters; simulate 1000 samples as in Section 2.5.1, and plot the histogram along with the fitted standard distributions as described in Section 2.5.3; perform the chi-square test to identify the optimal fit and obtain the distribution parameters; and finally run another simulation using the primary distribution and the fitted secondary distribution. 2.6.3 R Implementation The Danish data are assumed to follow a gamma distribution. We plot the computed aggregate claims and use fitdistr() to obtain the parameters for the gamma or lognormal distribution. Then, using the generic R implementation discussed in Section 2.5, we simulate from the new dataset and finally fit the standard distributions. The following R code reads the Danish data available in Data/DanishData.txt, groups the claims by month and year, calculates the sample mean and variance, and plots the histogram with fitted standard distributions. 
require(MASS)
source("Programs/Aggregate_Claims_Methods.r")
Danish.Data = ComputeAggClaimsFromData("Data/DanishData.txt")
Danish.Data$Agg.ClaimData = round(Danish.Data$Agg.ClaimData, digits = 0)
# mean(Danish.Data$Agg.ClaimData); var(Danish.Data$Agg.ClaimData)
# mean(Danish.Data$Agg.ClaimNos); var(Danish.Data$Agg.ClaimNos)

Figure 2.3 Actual Danish fire loss data fitted with standard distributions, 132 samples. In the initial case N, the primary distribution, is assumed to be negative binomial with parameters k = 25.32 and p = 0.6067, and the secondary distribution is assumed to be gamma with parameters shape = 3.6559 and rate = 0.065817. We simulate 1000 samples and obtain aggregate claim samples as in Section 2.5.1. The plot and chi-square test values are given below. The generic function PerformChiSquareTest, previously discussed in Section 2.5.4, is used here to compute the X^2 value and p-value pertaining to the chi-square distribution; the corresponding values are tabulated in Table 2.2 below. Figure 2.4 Histogram of simulated samples of Danish data fitted with standard distributions. Figure 2.4 shows simulated samples of the Danish data for sample size 1000, together with the different distribution curves fitted to the simulated data. These results suggest that the best choice of model is the gamma distribution with parameters shape = 8.446 and rate = 0.00931. Chapter 3 Survival models Graduation In chapter 2 we discussed aggregate claims and how they can be modelled and simulated using R. In this chapter we discuss one of the important factors with a direct impact on the arising of claims: human mortality. Life insurance companies use this factor to model the risk arising out of claims. We shall analyse and investigate the crude data presented in the Human Mortality Database for the selected countries, Scotland and Sweden, using statistical techniques. 
The MortalitySmooth package is used to smooth the data based on the Bayesian information criterion (BIC), a technique used to determine the smoothing parameter; we shall also plot the data. Finally, we conclude by comparing the mortality of the two countries over time.

3.1 Introduction

Mortality data, in simple terms, is the recording of deaths in a defined set of a species. The data can be grouped by different variables such as sex, age, year, geographical location and species. In this section we use human data grouped by country population, sex, age and year. Human mortality in developed nations has improved significantly over the past few centuries. This is largely attributable to improved standards of living and national health services, and in recent decades there has been further tremendous improvement in health care, which has strong demographic and actuarial implications. Here we use human mortality data to analyse mortality trends, compute life tables and price different annuity products.

3.2 Sources of Data

The Human Mortality Database (HMD) is used to extract data on deaths and exposure. These data are collected from national statistical offices. In this dissertation we look at two countries, Sweden and Scotland, for specific ages and years. The deaths and exposure data are downloaded from the HMD; for example, Sweden's deaths are available at https://www.mortality.org/hmd/SWE/STATS/Deaths_1x1.txt. They are saved as .txt files on disk under /Data/Countryname_deaths.txt and /Data/Countryname_exposures.txt respectively. In general, data availability and formats vary across countries and over time. The female and male death and exposure data are taken from the raw data.
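The dissertation's own loader, LoadHMDData, is documented in Section 3.4 below; as a rough, hedged sketch of what reading an HMD 1x1 file involves, assuming the usual HMD layout of a description line and a blank line followed by Year, Age, Female, Male and Total columns, one could write:

```r
# Hedged sketch (not the dissertation's LoadHMDData itself) of reading an
# HMD "Deaths_1x1.txt"-style file into an Age x Year matrix.  Assumes the
# usual HMD layout: a description line, a blank line, then columns
# Year, Age, Female, Male, Total, with the open age written as "110+".
read.hmd <- function(path, ages, years, sex = "Total") {
  d <- read.table(path, skip = 2, header = TRUE,
                  colClasses = c("integer", "character", rep("numeric", 3)))
  d$Age <- as.integer(sub("\\+", "", d$Age))   # "110+" -> 110
  d <- d[d$Age %in% ages & d$Year %in% years, ]
  d <- d[order(d$Year, d$Age), ]               # year-major, age-minor order
  matrix(d[[sex]], nrow = length(ages), ncol = length(years),
         dimnames = list(ages, years))         # ages in rows, years in columns
}

# Tiny self-contained demonstration with a fake two-year extract
tmp <- tempfile()
writeLines(c("Sweden, Deaths (period 1x1)", "",
             "Year Age Female Male Total",
             "1990 30 10 12 22", "1990 31 11 13 24",
             "1991 30  9 14 23", "1991 31  8 15 23"), tmp)
death <- read.hmd(tmp, ages = 30:31, years = 1990:1991, sex = "Male")
death["30", "1991"]   # 14
```

The Age x Year matrix shape produced here matches the format the MortalitySmooth fitting functions expect, which is why the loader returns a matrix rather than the long table found in the raw file.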
The Total column in the data source is calculated as a weighted average based on the relative sizes of the two groups, male and female, at a given time.

3.3 Gompertz law graduation

The well-known actuary Benjamin Gompertz observed that over a long period of human life, the force of mortality increases geometrically with age. This was modelled for single years of life, and the model is linear on the log scale. The Gompertz law states that the mortality rate increases in geometric progression, so death rates take the form

mu_x = A * B^x,

and the linear model is fitted by taking logs of both sides:

log(mu_x) = a + b*x, where a = log(A) and b = log(B).

The corresponding quadratic model is log(mu_x) = a + b*x + c*x^2.

3.3.1 Generalized Linear Models and P-splines in smoothing data

Generalized Linear Models (GLMs) are an extension of linear models that allow models to be fitted to data following probability distributions such as the Poisson or binomial. If D_x is the number of deaths at age x and E_x is the central exposed to risk, then the maximum likelihood estimate of the force of mortality is mu_hat_x = D_x / E_x, and under a GLM, D_x follows a Poisson distribution, D_x ~ Poisson(E_x * mu_x), with log(mu_x) = a + b*x.

We shall use the P-spline technique to smooth the data. As above, with the number of deaths following a Poisson distribution, we fit the regression using the exposure as the offset parameter. Splines are piecewise polynomials, usually cubic, joined so that second derivatives agree at the joins; these joins are called knots. The fit uses a B-spline regression matrix. A penalty of linear, quadratic or cubic order penalizes irregular behaviour of the data through differences of adjacent coefficients. This penalty enters the log-likelihood together with a smoothing parameter lambda, and the penalized likelihood is maximised to obtain the smoothed fit. The larger the value of lambda, the smoother the function but the greater the deviance. Thus the optimal value of lambda is chosen to balance deviance and model complexity.
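To make the deviance-complexity trade-off concrete, here is a small self-contained sketch of selecting lambda by BIC. It uses Gaussian penalized least squares on synthetic log mortality, not the MortalitySmooth internals (which penalize a Poisson likelihood), so it only illustrates the mechanism:

```r
# Choosing the smoothing parameter lambda of a penalized B-spline fit by BIC.
# Synthetic, roughly Gompertz-linear log mortality; Gaussian fit for simplicity.
library(splines)

set.seed(1)
x <- seq(30, 80, by = 1)                        # ages
y <- -9 + 0.09 * x + rnorm(length(x), 0, 0.1)   # noisy log mortality (synthetic)

B <- bs(x, df = 20)                             # B-spline regression basis
D <- diff(diag(ncol(B)), differences = 2)       # second-order difference penalty

fit.penalized <- function(lambda) {
  A <- crossprod(B) + lambda * crossprod(D)     # (B'B + lambda D'D)
  H.core <- solve(A, t(B))                      # fitted = B %*% H.core %*% y
  fitted <- as.vector(B %*% (H.core %*% y))
  ed  <- sum(diag(B %*% H.core))                # effective dimension (trace of hat matrix)
  dev <- sum((y - fitted)^2)
  bic <- length(y) * log(dev / length(y)) + log(length(y)) * ed
  list(lambda = lambda, fitted = fitted, ed = ed, bic = bic)
}

fits <- lapply(10^seq(-2, 4, by = 0.5), fit.penalized)
best <- fits[[which.min(sapply(fits, function(f) f$bic))]]
best$lambda   # the lambda that balances deviance against model complexity
```

As lambda grows, the effective dimension shrinks towards the dimension of the penalty's null space (straight lines for a second-order penalty), which is exactly the smoothness-versus-deviance balance described above.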
lambda is evaluated using techniques such as the Bayesian information criterion (BIC) and Akaike's information criterion (AIC). The MortalitySmooth package in R implements the techniques described above. There are several choices when smoothing with P-splines: the number of knots ndx, the degree of the P-spline (linear, quadratic or cubic) bdeg, and the smoothing parameter lambda. The package fits a P-spline model with equally-spaced B-splines along x. Four methods are available in the package for selecting the smoothing parameter, the default being BIC. AIC minimization is also available, but BIC provides better outcomes for large values. In this dissertation we smooth the data using the default BIC option and also using a fixed lambda value.

3.4 MortalitySmooth package R implementation

In this section we describe a generic R implementation that reads deaths and exposure data from the Human Mortality Database and uses the MortalitySmooth package to smooth the data with P-splines. The following code loads the package and runs the smoothing:

require(MortalitySmooth)
source("Programs/Graduation_Methods.r")
Age <- 30:80; Year <- 1959:1999
country <- "Scotland"; Sex <- "Males"
death = LoadHMDData(country, Age, Year, "Deaths", Sex)
exposure = LoadHMDData(country, Age, Year, "Exposures", Sex)
FilParam.Val <- 40
Hmd.SmoothData = SmoothenHMDDataset(Age, Year, death, exposure)
XAxis <- Year
YAxis <- log(fitted(Hmd.SmoothData$Smoothfit.BIC)[Age == FilParam.Val, ] / exposure[Age == FilParam.Val, ])
PlotHMDDataset(XAxis, log(death[Age == FilParam.Val, ] / exposure[Age == FilParam.Val, ]), MainDesc, Xlab, Ylab, legend.loc)
DrawlineHMDDataset(XAxis, YAxis)

The MortalitySmooth package is loaded, and the generic methods for graduation smoothing are available in Programs/Graduation_Methods.r. A step-by-step description of the code follows.
Step 1: Load Human Mortality data

Method Name: LoadHMDData
Description: Returns a matrix of dimension m x n, with m the number of ages and n the number of years. The object is formatted for use with the Mortality2Dsmooth function.
Implementation: LoadHMDData(Country, Age, Year, Type, Sex)
Arguments:
Country - Name of the country for which data is to be loaded. If the country is Denmark, Sweden, Switzerland or Japan, the SelectHMDData function of the MortalitySmooth package is called internally.
Age - Vector giving the rows of the matrix object. There must be at least one value.
Year - Vector giving the columns of the matrix object. There must be at least one value.
Type - The type of data to be loaded from the Human Mortality Database; either "Deaths" or "Exposures".
Sex - An optional filter; one of "Males", "Females" or "Total". The default is "Total".
Details: The method LoadHMDData in Programs/Graduation_Methods.r reads the data available in the directory Data and loads deaths or exposures for the given parameters. The data can be filtered by Country, Age, Year, Type (Deaths or Exposures) and Sex.

Figure 3.1 Format of the matrix objects Death and Exposure.

Figure 3.1 shows the format used in the objects Death and Exposure: a matrix with ages in rows and years in columns. The MortalitySmooth package has built-in support for certain countries, namely Denmark, Switzerland, Sweden and Japan, whose data can be accessed directly through the predefined function SelectHMDData. LoadHMDData checks the value of the country variable: if it equals one of the four countries covered by the package, SelectHMDData is called internally; otherwise the customized generic function is used to return the objects.
The format of the returned object is identical in both cases.

Step 2: Smoothen HMD dataset

Method Name: SmoothenHMDDataset
Description: Returns a list of objects of type Mort2Dsmooth, a two-dimensional P-spline smooth of the input data with the order fixed at its default, computed both under BIC and under a fixed lambda. These objects are customized for mortality data only. The Smoothfit.BIC and Smoothfit.fitLAM objects are returned along with the fitted values fitBIC.Data.
Implementation: SmoothenHMDDataset(XAxis, YAxis, ZAxis, Offset.Param)
Arguments:
XAxis - Vector for the abscissa of the data passed to the Mortality2Dsmooth function of the MortalitySmooth package; here the Age vector.
YAxis - Vector for the ordinate of the data; here the Year vector.
ZAxis - Matrix of count responses; here the Death matrix object, whose dimensions must correspond to the lengths of XAxis and YAxis.
Offset.Param - A matrix of prior known values to be included in the linear predictor during fitting of the 2D data; here the exposure matrix object.
Details: The method SmoothenHMDDataset in Programs/Graduation_Methods.r smooths the data based on the death and exposure objects loaded in Step 1. Age, year and death are passed as the x-axis, y-axis and z-axis respectively, with exposure as the offset parameter. These parameters are fitted internally by the Mortality2Dsmooth function of the MortalitySmooth package.
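The offset mechanism that SmoothenHMDDataset passes to Mortality2Dsmooth can be illustrated with the plain Gompertz-style GLM of Section 3.3. The data below are synthetic; the point is only that glm() recovers the log-linear force of mortality when log(exposure) enters as an offset:

```r
# Poisson GLM with an exposure offset: deaths ~ Poisson(exposure * exp(a + b*age)).
# Synthetic data with known a and b, so the fit can be checked against the truth.
set.seed(7)
age      <- 30:80
exposure <- rep(50000, length(age))     # central exposed to risk at each age
a <- -10; b <- 0.1                      # true Gompertz log-linear parameters
deaths <- rpois(length(age), exposure * exp(a + b * age))

fit <- glm(deaths ~ age, family = poisson, offset = log(exposure))
coef(fit)   # estimates should be close to a = -10 and b = 0.1
```

The P-spline fit used by the package replaces the single linear term a + b*age with a penalized B-spline basis, but the Poisson likelihood and the log(exposure) offset work exactly as in this two-parameter sketch.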
Step 3: Plot the smoothed data based on user input

Method Name: PlotHMDDataset
Description: Plots the smoothed object; the axes, legend and axis scales are automatically customized from the user inputs.
Implementation: PlotHMDDataset(XAxis, YAxis, MainDesc, Xlab, Ylab, legend.loc, legend.Val, Plot.Type, Ylim)
Arguments:
XAxis - Vector of x-axis values; here Age or Year, depending on the user request.
YAxis - Vector of y-axis values; here smoothed log mortality values filtered for a particular age or year.
MainDesc - Main title describing the plot.
Xlab - X axis label.
Ylab - Y axis label.
legend.loc - Location of the legend; it can take values such as "topright" or "topleft".
legend.Val - Legend description; a vector of strings.
Plot.Type - An optional value to change the plot type. The default is the plot's own default; if the value is 1, a line figure is plotted.
Ylim - An optional value setting the height of the y axis; by default the maximum of the y values.
Details: The generic method PlotHMDDataset in Programs/Graduation_Methods.r plots the smoothed fitted mortality values, with options customized from user inputs. The generic method DrawlineHMDDataset in Programs/Graduation_Methods.r draws the fitted line and is usually called after PlotHMDDataset.

3.5 Graphical representation of smoothed mortality data

In this section we look at graphical representations of the mortality data for the selected countries, Scotland and Sweden. The generic program discussed in Section 3.4 is used to produce the plots from customized user inputs.

Log mortality of smoothed data vs. actual fit for Sweden

Figure 3.3 Left panel: Plot of year vs. log(mortality) for Sweden, for age 40 and years 1945 to 2005. The points represent real data; the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively.
Right panel: Plot of age vs. log(mortality) for Sweden, for year 1995 and ages 30 to 90. The points represent real data; the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively.

Log mortality of smoothed data vs. actual fit for Scotland

Figure 3.4 Left panel: Plot of year vs. log(mortality) for Scotland, for age 40 and years 1945 to 2005. The points represent real data; the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively. Right panel: Plot of age vs. log(mortality) for Scotland, for year 1995 and ages 30 to 90. The points represent real data; the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively.

Log mortality of females vs. males for Sweden

Figure 3.5 Left panel: Plot of year vs. log(mortality) for Sweden, for age 40 and years 1945 to 2005. The red and blue points represent real data for males and females respectively; the red and blue curves represent smoothed fitted BIC curves for males and females respectively. Right panel: Plot of age vs. log(mortality) for Sweden, for year 2000 and ages 25 to 90. The red and blue points represent real data for males and females respectively; the red and blue curves represent smoothed fitted BIC curves for males and females respectively.

Figure 3.5 shows the mortality rates for males and females in Sweden, age-wise and year-wise.
The left panel of Figure 3.5 reveals that male mortality has been higher than female mortality over the years, with a sudden increase in male mortality from the mid 1960s until the late 1970s. Life expectancy for Swedish males in 1960 was 71.24 years versus 74.92 for women, and over the next decade it rose to 77.06 for women but only 72.2 for men, which explains the trend (https://www.scb.se/Pages/TableAndChart____26041.aspx). The right panel shows that male mortality exceeds female mortality in the year 2000. The male-to-female sex ratio in Sweden is 1.06 at birth and decreases consistently to 1.03 for ages 15-64 and 0.79 at ages 65 and above, consistent with mortality increasing more for males than for females (https://www.indexmundi.com/sweden/sex_ratio.html).

Log mortality of females vs. males for Scotland

Figure 3.6 Left panel: Plot of year vs. log(mortality) for Scotland, for age 40 and years 1945 to 2005. The red and blue points represent real data for males and females respectively; the red and blue curves represent smoothed fitted BIC curves for males and females respectively. Right panel: Plot of age vs. log(mortality) for Scotland, for year 2000 and ages 25 to 90. The red and blue points represent real data for males and females respectively; the red and blue curves represent smoothed fitted BIC curves for males and females respectively.

The left panel of Figure 3.6 shows a consistent dip in mortality rates overall, but male mortality at age 40 has steadily exceeded female mortality over a long period starting in the mid 1950s. The right panel shows that male mortality exceeds female mortality in the year 2000. The male-to-female sex ratio in Scotland is 1.04 at birth and decreases consistently to 0.94 for ages 15-64 and 0.88 at ages 65 and above, consistent with mortality increasing more for males than for females
(https://en.wikipedia.org/wiki/Demography_of_Scotland).

Log mortality of Scotland vs. Sweden

Figure 3.7 Left panel: Plot of year vs. log(mortality) for Sweden and Scotland, for age 40 and years 1945 to 2005. The red and blue points represent real data for Sweden and Scotland respectively; the red and blue curves represent smoothed fitted BIC curves for Sweden and Scotland respectively. Right panel: Plot of age vs. log(mortality) for Sweden and Scotland, for year 2000 and ages 25 to 90. The red and blue points represent real data for Sweden and Scotland respectively; the red and blue curves represent smoothed fitted BIC curves for Sweden and Scotland respectively.

The left panel of Figure 3.7 shows that mortality rates for Scotland are higher than for Sweden, and that Sweden's mortality rates have decreased consistently since the mid 1970s, whereas Scotland's rates, though they fell for a period, began to trend upward; this could be attributed to changes in living conditions.

Chapter 4 Pricing life insurance products using mortality rates

In chapter 3 we discussed the methodology for constructing mortality rates from the Human Mortality Database and smoothing them with the MortalitySmooth package. The smoothed, graduated data is used by life insurance companies to price insurance products such as annuities and life insurance. The general decline in mortality poses one of the key challenges to actuaries in planning, estimating and designing public retirement schemes and life annuities for the smooth functioning of the business. Moreover, the calculation of the expected present values required in pricing and reserving long-term benefits depends on projected mortality values. This process reduces the scope for future insolvency and guards against wrong projections of future costs. Actuaries therefore use life tables to analyse and estimate risk efficiently.
In this chapter we discuss the methods involved in constructing life tables and commutation functions from mortality rates. The computed values are then used to price insurance products such as annuities, term annuities, deferred annuities, whole life insurance, term insurance, deferred insurance and so on.

4.1 Life insurance systems and commutation functions

In this section we briefly describe some of the basic products used in the insurance industry and state the respective commutation functions. Most calculations involve computing expected present values, either of death benefits paid by the insurer or of periodic annuity payments made until the death of the policyholder. We therefore define the basic notation as follows:

v^x = (1 + i)^(-x), the discount factor for x years, where the interest rate i is assumed to be 0.04;
l_x, the expected number of survivors at age x, with l_0 assumed to be 100000;
d_x = l_x - l_(x+1), the expected number of deaths between ages x and x + 1.

The commutation functions are then
D_x = v^x * l_x and N_x = D_x + D_(x+1) + D_(x+2) + ...,
C_x = v^(x+1) * d_x and M_x = C_x + C_(x+1) + C_(x+2) + ...,
S_x = N_x + N_(x+1) + ... and R_x = M_x + M_(x+1) + ... .

4.2 Life annuity

Whole life annuity payable in advance: a payment of 1 is made at the beginning of each year while the policyholder, who took the policy at age x, is alive. In commutation form, ä_x = N_x / D_x.

Whole life annuity payable in arrears: a payment of 1 is made at the end of each year while the policyholder is alive: a_x = N_(x+1) / D_x.

Whole life annuity payable continuously: payments are made continuously at rate 1 per annum while the policyholder is alive; approximately ā_x ≈ N_x / D_x - 1/2.

n-year temporary annuity payable in advance: a payment of 1 is made at the beginning of each year while the policyholder is alive, for a maximum of n years: (N_x - N_(x+n)) / D_x.

n-year deferred annuity payable in advance: a payment of 1 is made at the beginning of each year while the policyholder is alive, the first payment being made at age x + n.
The commutation function for the deferred annuity is N_(x+n) / D_x.

Increasing annuity: an annuity-due paying 1 now, 2 next year, and so on, provided the policyholder is alive when the payment is due. The commutation function is (Iä)_x = S_x / D_x.

4.3 Life insurance

Whole life insurance: a death benefit of 1 is payable at the end of the year of death of a policyholder currently aged x, whenever death occurs: A_x = M_x / D_x.

n-year term insurance: a death benefit of 1 is payable at the end of the year of death of a policyholder currently aged x, provided death occurs within n years: (M_x - M_(x+n)) / D_x.

n-year pure endowment: a benefit of 1 is payable at the end of the n-year period provided the policyholder is still alive: D_(x+n) / D_x.

n-year endowment: a benefit of 1 is payable at the end of the year of death if the policyholder dies within n years, or at the end of n years if the policyholder is still alive at age x + n. It is the sum of the n-year term insurance and the n-year pure endowment: (M_x - M_(x+n) + D_(x+n)) / D_x.

Increasing whole life insurance: a benefit is payable at the end of the year of death, the amount being k + 1 if the policyholder dies between ages x + k and x + k + 1. The commutation function is (IA)_x = R_x / D_x.

4.4 R program implementation

In this section we explain the steps used to price the insurance products.

4.4.1 Construct life tables and commutation functions

The smoothed mortality data is used to compute the life table values such as l_x and d_x. These vectors are in turn used to construct the commutation function values D_x, N_x, C_x, M_x, S_x and R_x. Finally, the annuity and life insurance products are calculated, plotted and tabulated.

Method Name: CalculateCommFunctions
Description: Constructs life table and commutation function values, returning a list of commutation function variables computed from the smoothed mortality rates.
Implementation: CalculateCommFunctions(mux)
Arguments:
mux - Vector of smoothed mortality rates.
Details: The function CalculateCommFunctions returns the computed commutation function values.
l_0 is assumed to be 100000 and the values of mu_x are used to compute the survival probabilities and hence l_x. These values are looped over to calculate the respective commutation function variables, which are returned as a list.

Computation and graphical representation of life insurance products

Whole life annuity

Method Name: ComputeAnnuity.Life
Description: Returns a vector containing the computed whole life annuity payable in advance. The interest rate is assumed to be 4%.
Implementation: ComputeAnnuity.Life(index, CommFunc)
Arguments:
index - Length of the annuity vector.
CommFunc - List containing the commutation variables required to compute the annuity values.
Details: The function calculates the life annuity using the commutation vectors passed in the CommFunc parameter.

Figure 4.1 Plot of age vs. annuity prices for males and females, for year 2000 and ages 20 to 90. The red and blue curves represent smoothed fitted curves for males and females respectively. The left panel represents Sweden and the right panel Scotland.

From Figure 4.1 we infer that annuity prices for males and females in Scotland are more expensive than for males and females in Sweden; this reflects the difference between the two countries' mortality rates discussed in Section 3.5. Also, in general, male annuity prices are more expensive than female prices in each country, reflecting the difference between male and female mortality rates discussed in Section 3.5.

ComputeWholeInsurance.Life

Method Name: ComputeWholeInsurance.Life
Description: Returns a vector containing the computed whole life insurance values.
Implementation: ComputeWholeInsurance.Life(index, CommFunc)
Arguments:
index - Length of the vector.
CommFunc - List containing the commutation variables required to compute the whole life insurance values.
Details: The function calculates whole life insurance using the commutation vectors, as in the previous section.
Figure 4.2 Plot of age vs. whole life insurance prices for males and females, for year 2000 and ages 20 to 90. The red and blue curves represent smoothed fitted curves for males and females respectively. The left panel represents Sweden and the right panel Scotland.

From Figure 4.2 we infer that whole life insurance prices increase with age, and from the y-axis scales we infer that Scotland's whole life insurance prices are higher than Sweden's. In general, female whole life insurance is less expensive than male because of lower female mortality rates, as discussed in Section 3.5.

ComputeIncreasingWholeInsurance.Life

Method Name: ComputeIncreasingWholeInsurance.Life
Description: Returns a vector containing the computed increasing whole life insurance values.
Implementation: ComputeIncreasingWholeInsurance.Life(index, CommFunc)
Arguments:
index - Length of the vector.
CommFunc - List containing the commutation variables required to compute the increasing whole life insurance values.
Details: The function calculates increasing whole life insurance using the commutation vectors, as in the previous section.

Figure 4.3 Plot of age vs. increasing whole life insurance prices for males and females, for year 2000 and ages 20 to 90. The red and blue curves represent smoothed fitted curves for males and females respectively. The left panel represents Sweden and the right panel Scotland.

From Figure 4.3 we infer that increasing whole life insurance prices increase with age until 60 and then decrease rapidly as age approaches 90; from the y-axis scales, Scotland's prices are higher than Sweden's. In general, female increasing whole life insurance is less expensive than male, converging as age approaches 90, owing to lower female mortality rates as discussed in Section 3.5.
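As a toy, self-contained illustration of the commutation arithmetic behind CalculateCommFunctions and the Compute*.Life functions above (the mortality curve is a synthetic Gompertz, not smoothed HMD data):

```r
# Toy commutation columns and product prices at 4% interest, in the spirit of
# CalculateCommFunctions / ComputeAnnuity.Life.  Synthetic Gompertz-like q_x.
i    <- 0.04
v    <- 1 / (1 + i)
ages <- 0:120
qx   <- pmin(1, 0.0001 * exp(0.09 * ages))    # q_x reaches 1 near age 103, closing the table

lx <- numeric(length(ages)); lx[1] <- 100000  # l_0 = 100000
for (k in seq_along(ages)[-1]) lx[k] <- lx[k - 1] * (1 - qx[k - 1])
dx <- lx * qx                                 # deaths between x and x+1

Dx <- v^ages * lx
Cx <- v^(ages + 1) * dx
Nx <- rev(cumsum(rev(Dx)))                    # N_x = D_x + D_{x+1} + ...
Mx <- rev(cumsum(rev(Cx)))                    # M_x = C_x + C_{x+1} + ...
Sx <- rev(cumsum(rev(Nx)))                    # for increasing annuities
Rx <- rev(cumsum(rev(Mx)))                    # for increasing insurance

annuity.due  <- Nx / Dx                       # whole life annuity-due
whole.life   <- Mx / Dx                       # whole life insurance
incr.insur   <- Rx / Dx                       # increasing whole life insurance
incr.annuity <- Sx / Dx                       # increasing annuity-due

round(c(a65 = annuity.due[ages == 65], A65 = whole.life[ages == 65]), 3)
```

A useful internal check on any such table is the identity A_x + d * ä_x = 1, with d = i / (1 + i); it holds exactly for a closed table and catches indexing mistakes in the commutation sums.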
ComputeIncreasingAnnuity.Life

Method Name: ComputeIncreasingAnnuity.Life
Description: Returns a vector containing the computed increasing annuity values. The interest rate is assumed to be 4%.
Implementation: ComputeIncreasingAnnuity.Life(index, CommFunc)
Arguments:
index - Length of the vector.
CommFunc - List containing the commutation variables required to compute the increasing annuity values.
Details: The function calculates the increasing annuity using the commutation vectors, as in the previous section.

Figure 4.4 Plot of age vs. increasing annuity prices for males and females, for year 2000 and ages 20 to 90. The red and blue curves represent smoothed fitted curves for males and females respectively. The left panel represents Sweden and the right panel Scotland.

From Figure 4.4 we infer that increasing annuity prices decrease as age increases; Scotland's increasing annuity prices are slightly higher than Sweden's. In general, female increasing annuity prices are less expensive than male, converging as age approaches 90.

Conclusions

In this dissertation we set out to show how R packages such as actuar, MortalitySmooth and MASS can be used to model aggregate loss claims and human mortality. We used compound distributions to model aggregate claims with actuar, and P-spline smoothing techniques to smooth mortality data with the MortalitySmooth package. We illustrated these concepts on real data, the Danish fire loss data and the Human Mortality Database records for Scotland and Sweden, and priced life insurance products.

In chapter 2 we presented the general background to compound distributions for modelling aggregate claims and performed simulation using a compound Poisson distribution. Our analysis suggested, via a goodness-of-fit test, that the Weibull distribution fits the loss claim distribution well.
Finally, we analysed the Danish loss insurance data from 1980 to 1990, used a negative binomial distribution for the number of claims, simulated 1000 samples with a gamma severity distribution, and concluded from the histogram and the chi-square goodness-of-fit test that the gamma distribution gave a better fit.

In chapter 3 we briefly explained the concepts of graduation and generalised linear models. Smoothing techniques using P-splines were presented, with the smoothing parameter chosen by the Bayesian information criterion. We obtained deaths and exposure data from the Human Mortality Database for the selected countries, Sweden and Scotland, and smoothed the mortality rates using the MortalitySmooth package in R. Graphs of the actual data and of the mortality data smoothed using BIC and a fixed smoothing parameter lambda = 10000 were presented for the selected countries. We also compared mortality rates across groups, males versus females within a country, and total mortality across Sweden and Scotland, over ranges of ages and years. We concluded that mortality rates in Scotland are higher than in Sweden and that, in general, male mortality rates are higher than female rates.

In chapter 4 we looked at various life insurance and pension products widely used in the insurance industry, and constructed life tables and commutation functions to compute annuity values from the smoothed data derived in chapter 3. We plotted and compared several insurance products, and concluded that whole life annuity prices decrease as age increases and that male annuity prices are higher than female prices.

Wednesday, May 6, 2020

Comparing the Plays, A Raisin in the Sun and Death of a...

In history there have been countless plays, but only two that fully capture the American dream like A Raisin in the Sun and Death of a Salesman. In both plays the protagonist is trying to achieve the American dream, but it is nearly impossible when neither of them has the respect of his superiors or of the people around him. It is remarkable that two different plays can so closely parallel each other when they are separated by more than ten years. Both Miller and Hansberry created a theme of achieving goals: Willy Loman just wanted to earn the respect of the people around him, while Walter Younger wanted to get rich quick and support his family. American politician Reubin Askew once said, "We must stop talking…"

But luckily they both have the support of a loving family to help them through it. Ruth Younger was one of the few things that kept Walter sane and their apartment intact; she kept up the apartment and remains emotionally strong throughout the play: "goodbye misery! I don't ever want to see your ugly face again". A character from Death of a Salesman that is almost identical to Ruth is Linda Loman. Linda nurtured a hurting family all those times when Willy's misguided attempts at success failed miserably. She too held her family together with her emotional strength; without her, Willy would have broken long before he did in the play. Linda was the one who kept a cool head in heavy situations; when everyone was panicking she was the one to bring them down to earth. These two women played a huge role in keeping their families together; they knew that when the tough times came they were the ones who needed to stay strong. Both plays have a character who gives the families news they don't want to hear. In A Raisin in the Sun that character is Mr. Karl Lindner; he informs the Youngers that they are unwanted in the neighborhood they have just moved into.
He says that because of their ethnicity they will lower the value of the homes around them. Their excitement from finally buying a house of their own was quickly extinguished. Howard Wagner is another prime example of someone who delivers bad news, in this case catastrophic news.