Thursday, October 31, 2019

Introduction to Approaches in Psychology Essay 1

Pavlov trained dogs to salivate in response to a neutral stimulus, a noise or a light, by pairing it with food or a sour solution. John B. Watson, considered the father of behaviourism, conducted an experiment with an 11-month-old child, Albert. He presented the child with a loud, frightening bang and a rat at the same time. After six or seven repetitions of the noise and the rat together over a week, the child became afraid of the rat, which he had not been earlier.

1. Conditioned fear and anxiety: many of the phobias that people experience are due to conditioning. Consider a child and his father travelling in a plane. The father jokes about how the plane can crash and, finding it very funny, decides to repeat the joke whenever they fly. Many years later, even though the child has grown up, he is afraid to travel in a plane; conditioning has caused the fear.

2. Advertising: in a beer advertisement featuring a young, attractive woman wearing a bikini, the woman (the unconditioned stimulus, US) evokes a mildly aroused feeling (the unconditioned response, UR) in most men. The beer becomes associated with this response through the classical conditioning effect (Classical Conditioning, 1996).

Waschulewski-Floruss and Miltner (1994) investigated whether experimental pain responses can be conditioned using auditory stimuli in a differential trace conditioning paradigm in 16 healthy subjects. An intracutaneous electrical stimulus applied to the left middle finger (10 ms duration) was the UCS. Tones of 1000 and 1400 Hz were used as CS+ and CS-, respectively. A trace conditioning paradigm was used with an 800 ms interval between CS and UCS. Twenty-nine electrode sites recorded somatosensory event-related potentials (SEP) and auditory event-related potentials (AEP), and subjective pain reports were noted. The results revealed significant differences in the subjective sensations between the CS+ and CS-, but not in the amplitudes and latencies of the P50, N100, P200, and P300 AEP components.

Monday, October 28, 2019

Physiological Ecology Essay

ABSTRACT

Mytilus edulis, the common mussel, is found around the coast of the British Isles, with large commercial beds in the Wash, Morecambe Bay, Conway Bay and the estuaries of south-west England, north Wales and west Scotland. It belongs to the phylum Mollusca (e.g. snails, slugs, mussels, cockles and clams), class Pelecypoda (e.g. clams, cockles, mussels, oysters and scallops). Mytilus is an extremely widely studied species, mainly because of its widespread distribution, abundance, and ecological and commercial importance; it is also used as a bio-indicator. The objective of the study was to find out the effects of respiration, water pumping activity and environmental stresses on the mussel's growth. The environmental stresses include prolonged air exposure and low salinity, alone and in combination with elevated temperature. The main focus was the age and growth of Mytilus. The mussels were challenged with a number of tests to determine their behaviour and to record their response to different environments. The tests show that Mytilus living in an uncontaminated area grow faster than those living in polluted areas, which can be deduced from the research conducted along with the experiments.

INTRODUCTION

Mytilus are usually present on the rocky shores of open coasts, attached to rock surfaces and in crevices, and on rocks and piers in sheltered harbours and estuaries, often occurring as dense masses in the cooler waters of the world, usually extending from the Arctic to the Mediterranean in the north-east Atlantic. Two important factors that play a part in the growth and life of Mytilus are temperature and salinity.

TEMPERATURE: temperature is a vital factor responsible for limiting the growth of mussels. Extreme low temperature causes damage in Mytilus, but this is minimised by nucleating agents in the haemolymph. Mytilus is sporadically prone to perilous freezing conditions even at moderate temperatures; large adults can endure laboratory conditions of -16 °C easily for 24 hours and are capable of surviving even if the tissue temperature falls below -10 °C. In Sweden, mussels actively ingested seston at -10 °C, suggesting that they can utilise spring phytoplankton blooms in boreal waters even at low temperatures. M. edulis can tolerate high temperature and desiccation as well; for example, the British M. edulis has an upper sustained thermal tolerance limit of about 29 °C (Mytilus edulis).

SALINITY: in contrast with other biogenic reef species, M. edulis can bear a wide range of salinity, although it stops feeding when exposed to low salinities. M. edulis adapts well to salinities as low as 4-5%. Exposure to 16% salinity for a month resulted in shell growth reduced by as much as 26% to 32%, while exposure to 22% caused only a minute drop in growth rate. When exposed to 13%, the growth rate fell to zero but recovered to more than 80% within a month of return to 32%.

MATERIALS AND METHODS

Materials: incubation tubes, incubator, cotton, knob, benzoic acid. All samples were divided into four groups: two pristine groups (Pristine A and Pristine B) were compared with two polluted groups (Polluted A and Polluted B). Curves were drawn to compare Pristine A with Polluted A and Pristine B with Polluted B, and the change in mass was observed as the temperature changed. Mytilus were cultured in flat trays measuring 20-40 cm; two trays held the pristine cultures while the remaining two were used for polluted growth.
The effect of temperature change was observed in all four trays along with the consequent change in mass. Mytilus were placed in the trays to be cultured. Tests conducted in five different labs are analysed to show that Mytilus favour a pristine environment as compared to a polluted one.

LAB #1

This lab deals with the energy content of a food substrate or of animal tissue, which is considered the most important component for the growth of any organism. The method used to determine the energy content of biological materials is micro-bomb calorimetry: sensitive microelectrodes assess the heat produced by igniting a pellet of dry tissue within a stainless steel bomb. Calibration is obtained with a chemical of fixed energy content, so that the temperature change can be transformed into an energy content for the tissue. In order to deal with a small sample, a micro-bomb calorimeter is used, filled with oxygen, and a small wire that works like a light-bulb filament is used to ignite the tissue. Alternatively, using the oxygen supplied by potassium dichromate, a strong oxidising reagent contained in concentrated sulfuric acid, the tissue is burnt chemically. The orange Cr(VI) is reduced to green Cr(III) during burning, and this change can be quantified using a spectrophotometer.

LAB #5

By determining the effects of geometric constraints on biological processes, the allometric and isometric relationships of organisms are studied. The lab examines how gill area, shell volume and foot weight scale with the size of the mussel, observing how the size of the mussel affects different biological processes. Allometric scaling is described by equations of the form Y = A·X^B, where A is a constant, B an exponent, X is mass and Y is a biological process. Allometric relationships are represented as curves on linear axes, but when plotted on log/log axes they become straight lines, and the scaling exponent of the function is determined by the slope of the line.

LAB #6

This lab's research aims to calculate the following at ambient temperature using a meticulous method: the respiration rate of one mussel from a polluted area, the respiration rate of one mussel from a pristine area, and a control respiration. The material employed in this test is a fibre-optic oxygen electrode, which relies on the quenching of light emission from a ruthenium compound in the presence of oxygen, so as to measure the flux of oxygen over time. To measure the respiration rates, the mussels are enclosed in individual restrained respirometers filled with seawater and connected to an oxygen electrode located in a separate chamber with a slow flow of water from a peristaltic pump. Set up the oxygen system to record data every minute for an hour. Place a cleaned mussel in the chamber, attach the lid and submerge the chamber. Place the electrode in the holder and attach the hoses to the pump and chamber so that the water flows past them, turning the pump on to slow. The data logging runs for an hour; mark the start of the downward slope in the recorded readings. Measure the volume of the chambers and the water level in the hoses, the length of the mussel to estimate the tissue weight, and the mussel volume to ascertain the exact volume of water in the chamber.
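To give a concrete picture of how respirometry data of the kind collected in LAB #6 might be reduced, the following is a minimal sketch and not part of the original protocol: the minute-by-minute oxygen readings, chamber volume and tissue weight are invented for illustration, and the oxygen consumption rate is estimated from the slope of a straight-line fit.

```python
import numpy as np

# Hypothetical example data: oxygen concentration (mg O2 per litre) logged once
# per minute over one hour while a single mussel respires in a closed chamber.
minutes = np.arange(60)
oxygen_mg_per_l = 8.0 - 0.012 * minutes + np.random.normal(0, 0.01, 60)

# Fit a straight line to the declining trace; the slope is the rate of oxygen
# depletion in mg O2 per litre per minute.
slope, intercept = np.polyfit(minutes, oxygen_mg_per_l, 1)

# Illustrative chamber and animal values (assumed, not taken from the essay):
chamber_volume_l = 0.25      # seawater volume actually in the chamber
dry_tissue_weight_g = 0.8    # estimated from the mussel's length

# Convert to a mass-specific respiration rate in mg O2 per gram per hour.
respiration_rate = (-slope) * chamber_volume_l * 60 / dry_tissue_weight_g
print(f"Estimated respiration rate: {respiration_rate:.3f} mg O2 g^-1 h^-1")
```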
LAB #7

This lab's main concern was to calculate the protein content of mussel tissues using the Lowry chemical assay, which consists of combining a dye reagent with soluble protein to produce a coloration that is directly proportional to the amount of protein present. Protein is often used in physiological ecology to normalise data, as it plays a functional and structural role and is directly associated with functional components within the cells. The Bradford assay has often been used in this experiment as an alternative method for protein determination. A dilute copper tartrate solution is added to the protein, which forms a complex; to develop the coloration, the Folin reagent is added to the protein-copper complex, and within 15 minutes it produces a blue colour. This has a peak absorbance at 750 nm and can be quantified at this wavelength using a spectrophotometer. A calibration must be made with a series of known protein concentrations and a calibration line constructed.

The reagents in the assay are reacted with a series of known protein solutions (0.2-1.5 mg/ml) dissolved in a sodium hydroxide buffer to remove buffer effects from the calibration. Prepare a series of clean 2 ml snap-cap tubes. The concentration series is made by diluting the stock Bovine Serum Albumin to x, x/10, x/2, x/4 and 3x/4 of the stock concentration. Into the 1.7 ml snap-cap tubes, transfer 25 µl of the standards and then add 125 µl of reagent A; swirl carefully. To each tube add 1.0 ml of reagent B and vortex carefully. Leave for 15 minutes and then measure the absorbance at 750 nm against distilled water. Plot the protein content along the X axis and the absorption along the Y axis to obtain the calibration line. The calibration line, of the form Y = A - BX, can then be used to estimate the protein content X from an unknown absorption Y.

LAB #9

This lab studies the functional attributes of living enzymes, employing a quantitative approach to their measurement. By using a simple spectrophotometric assay to quantify the enzyme citrate synthase (CS) in two populations of Mytilus, any possible consequences of this variation can be identified through its functional value. The enzyme citrate synthase is rate-limiting, mediating the transfer of pyruvate into the TCA cycle as citric acid. The procedure determines: quantification of CS activity, and quantification of the protein content to allow the CS content to be normalised. The reaction depends on extracting the living tissue in such a way that the enzymes remain operative. DTNB is reduced by CoASH, which is a stoichiometric by-product of the reaction; as it is reduced, the DTNB changes colour, with a peak absorbance at 412 nm. The procedure relies on extraction of the CS in a cold buffer. A small portion is diluted with an acetyl-CoA solution, and the reaction begins when the oxaloacetate solution is added; the resulting colour change can be monitored in a spectrophotometer.
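As an illustration of the calibration step described for LAB #7, here is a minimal sketch that is not taken from the original lab protocol: the absorbance values are invented, the fit is an ordinary least-squares line of absorbance against protein concentration (written with a positive slope), and the line is then inverted to estimate the protein content of an unknown sample.

```python
import numpy as np

# Hypothetical calibration data: BSA standards (mg/ml) and their measured
# absorbance at 750 nm (values invented for illustration).
protein_mg_ml = np.array([0.2, 0.375, 0.75, 1.125, 1.5])
absorbance_750 = np.array([0.11, 0.19, 0.37, 0.55, 0.72])

# Fit the calibration line: absorbance = intercept + slope * protein.
slope, intercept = np.polyfit(protein_mg_ml, absorbance_750, 1)

def protein_from_absorbance(abs_750: float) -> float:
    """Estimate protein concentration (mg/ml) from absorbance at 750 nm."""
    return (abs_750 - intercept) / slope

unknown_abs = 0.42
print(f"Calibration: A = {intercept:.3f} + {slope:.3f} * C")
print(f"Estimated protein content: {protein_from_absorbance(unknown_abs):.2f} mg/ml")
```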
RESULTS

The results clearly show that Mytilus grow more in pristine areas than in polluted areas; several factors affect Mytilus growth in polluted areas.

Graph: Polluted A (obtained from the Polluted A readings)

Lab 6: Threshold salinity levels were recorded for the individual age groups, which showed a variation in behavioural response to salinity fluctuations. Water salinity below the critical values caused isolating responses such as closing the mantle cavity, withdrawal of the siphons and closing of the shell valves in Mytilus. Another factor noticed was that age did not influence the sensitivity of mussels to low salinity and elevated temperature. However, the older mussels exhibited a slightly lower critical salinity value after going through the fluctuations.

The scope for mussel growth remained positive, except under the treatments of no algae and high silt, when true carbon assimilation and the rates of respiration and excretion were balanced against energy intake. In estuarine systems, where seston quality and quantity are variable, the mussels living there have evolved a feeding strategy that involves minimal metabolic cost while maximising energy assimilation when acquiring food from the environment.

DISCUSSION AND CONCLUSIONS

A number of factors can hinder the growth of Mytilus in polluted areas. In polluted areas the change in mass of Mytilus was much greater with slight variations of temperature, whereas the change in mass was negligible in the pristine area. Several factors can hinder growth of Mytilus on a polluted surface: pollutants in water and air can hinder their growth, and pollutants also destroy food and nutrients, so the Mytilus may find it difficult to obtain good nourishment. Environmental variations also have a deep effect on their growth. The blue mussels can survive in air for 10-14 days at temperatures varying from 10-20 °C, and even longer at lower temperatures. Like many other intertidal molluscs, M. edulis uses complex behavioural, physiological and biochemical mechanisms to tolerate prolonged periods of air exposure, extreme salinity changes or other unfavourable environmental conditions. Mussels that are small or medium in size are not as susceptible to air exposure as large mussels, mainly because of the higher absolute metabolic rates of the large mussels. In our experimental research, size did not play a role in survival in air. The factors change from species to species; for example, in some species of mussels resistance increases with the developmental age of the animal, and once it reaches the maximum level the process may reverse.

When blue mussels M. edulis were exposed to high concentrations of copper, and the Antarctic scallop Adamussium colbecki to high concentrations of cadmium, the age factor did not influence survival; however, the capacity to recover deteriorates with age. The physiological traits of food ingestion rate, carbon assimilation efficiency, and respiration and excretion rates are integrated into the energy available for growth, supplying a prompt quantitative estimate of the energy status of the mussels. Research on this can provide insight into the growth process and the influence of physiological activities. Geukensia demissa, commonly known as the ribbed mussel, can exert a profound influence on the ecological processes of salt marshes on the Atlantic coast of North America. These mussels are quite vulnerable to predators in the subtidal area, since they have relatively thin shells; however, they are physiologically well adapted to the extreme environment in which they are exposed to air for 70% of the tidal cycle. This exposure places the mussels under severe stress, since the time available for feeding, defecation and other essential physiological functions is limited. The mussels favour a pristine environment over a polluted one.

REFERENCES
"Mytilus edulis" Environmental Requirements (n.d.) UK Marine Special Areas of Conservation. [Accessed 4 December 2007] http://www.ukmarinesac.org.uk/communities/biogenic-reefs/br3_4.htm

Tyler-Walters, H. (2007) Mytilus edulis. Common mussel. Marine Life Information Network: Biology and Sensitivity Key Information Sub-programme [14 September 2007]. Plymouth: Marine Biological Association of the United Kingdom. [Accessed 4 December 2007] http://www.marlin.ac.uk/species/Mytilusedulis.htm

Sukhotin, A.A., Lajus, D.L. and Lesin, P.A. (28 October 2002) Influence of age and size on pumping activity and stress resistance in the marine bivalve Mytilus edulis L. Journal of Experimental Marine Biology and Ecology 284: 129-144. [Accessed 4 December 2007] http://www.elsevier.com/locate/jembe

Huang, S.C. and Newell, R.I.E. (5 February 2002) Seasonal variations in the rates of aquatic and aerial respiration and ammonium excretion of the ribbed mussel, Geukensia demissa (Dillwyn). Journal of Experimental Marine Biology and Ecology 270: 241-255. [Accessed 4 December 2007] http://www.elsevier.com/locate/jembe

Eder, E.B. and Lewis, M.N. (28 April 2005) Proximate composition and energetic value of demersal and pelagic prey species from the SW Atlantic Ocean. Marine Ecology Progress Series 291: 43-52. [Accessed 4 December 2007]

Arifin, Z. and Bendell-Young, L.I. (27 March 2001) Cost of selective feeding by the blue mussel Mytilus trossulus as measured by respiration and ammonia excretion rates. Journal of Experimental Marine Biology and Ecology 260: 259-269. [Accessed 4 December 2007] http://www.elsevier.nl/locate/jembe

Saturday, October 26, 2019

VaR Models in Predicting Equity Market Risk

Chapter 3: Research Design

This chapter presents how the proposed VaR models are applied in predicting equity market risk. Basically, the thesis first outlines the collected empirical data. We next focus on verifying the assumptions usually engaged in the VaR models and then identify whether the data characteristics are in line with these assumptions by examining the observed data. Various VaR models are subsequently discussed, beginning with the non-parametric approach (the historical simulation model) and followed by the parametric approaches under different distributional assumptions of returns, intentionally combined with the Cornish-Fisher Expansion technique. Finally, backtesting techniques are employed to evaluate the performance of the suggested VaR models.

3.1. Data

The data used in the study are financial time series that reflect the daily historical price changes for two single equity index assets: the FTSE 100 index of the UK market and the SP 500 of the US market. Mathematically, instead of using the arithmetic return, the paper employs the daily log-returns. The full period on which the calculations are based stretches from 05/06/2002 to 22/06/2009 for each index. More precisely, to implement the empirical test, the period is divided into two sub-periods: the first series of empirical data, used for the parameter estimation, spans from 05/06/2002 to 31/07/2007; the rest of the data, between 01/08/2007 and 22/06/2009, is used for predicting VaR figures and backtesting. It should be noted that the latter stage is exactly the current global financial crisis period, which began in August 2007, dramatically peaked in the closing months of 2008 and subsided significantly in the middle of 2009. Consequently, the study purposely examines the accuracy of the VaR models within this volatile time.

3.1.1. FTSE 100 index

The FTSE 100 Index is a share index of the 100 most highly capitalised UK companies listed on the London Stock Exchange, which began on 3 January 1984. FTSE 100 companies represent about 81% of the market capitalisation of the whole London Stock Exchange, and the index has become the most widely used UK stock market indicator. In the dissertation, the full data set used for the empirical analysis consists of 1782 observations (1782 working days) of the UK FTSE 100 index covering the period from 05/06/2002 to 22/06/2009.

3.1.2. SP 500 index

The SP 500 is a value-weighted index, published since 1957, of the prices of 500 large-cap common stocks actively traded in the United States. The stocks listed on the SP 500 are those of large publicly held companies that trade on either of the two largest American stock exchanges, NYSE Euronext and NASDAQ OMX. After the Dow Jones Industrial Average, the SP 500 is the most widely followed index of large-cap American stocks. The SP 500 refers not only to the index but also to the 500 companies that have their common stock included in it, and it is consequently considered a bellwether for the US economy. Similar to the FTSE 100, the data for the SP 500 are observed during the same period, with 1775 observations (1775 working days).

3.2. Data Analysis

For the VaR models, one of the most important aspects is the set of assumptions relating to measuring VaR. This section first discusses several VaR assumptions and then examines the characteristics of the collected empirical data.
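As a rough illustration of the data preparation described in section 3.1, the sketch below computes daily log-returns from a closing-price series and splits them into the estimation and forecasting windows. It is not part of the original thesis: the file name and column name are hypothetical, and pandas is simply one convenient tool for the job.

```python
import numpy as np
import pandas as pd

# Hypothetical input: a CSV of daily closing prices with a 'Close' column,
# indexed by date (e.g. FTSE 100 or SP 500 from 05/06/2002 to 22/06/2009).
prices = pd.read_csv("ftse100_close.csv", index_col=0, parse_dates=True)["Close"]

# Daily log-returns: R_t = ln(P_t / P_{t-1}).
log_returns = np.log(prices / prices.shift(1)).dropna()

# Split into the parameter-estimation window and the forecasting/backtesting window.
estimation = log_returns.loc["2002-06-05":"2007-07-31"]
forecasting = log_returns.loc["2007-08-01":"2009-06-22"]

print(len(estimation), "estimation returns;", len(forecasting), "forecast returns")
```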
3.2.1. Assumptions

3.2.1.1. Normality assumption

Normal distribution. As mentioned in chapter 2, most VaR models assume that the return distribution is normal, with a mean of 0 and a standard deviation of 1 (see Figure 3.1). Nonetheless, chapter 2 also shows that the actual returns in most previous empirical investigations do not completely follow the standard normal distribution.

Figure 3.1: Standard normal distribution

Skewness. The skewness is a measure of the asymmetry of the distribution of the financial time series around its mean. Normally, data are assumed to be symmetrically distributed with a skewness of 0. A dataset with either a positive or negative skew deviates from the normal distribution assumptions (see Figure 3.2). This can cause parametric approaches, such as RiskMetrics and the symmetric normal-GARCH(1,1) model under the assumption of normally distributed returns, to be less effective if asset returns are heavily skewed. The result can be an overestimation or underestimation of the VaR value, depending on the skew of the underlying asset returns.

Figure 3.2: Plot of a positive or negative skew

Kurtosis. The kurtosis measures the peakedness or flatness of the distribution of a data sample and describes how concentrated the returns are around their mean. A high value of kurtosis means that more of the data's variance comes from extreme deviations; in other words, a high kurtosis means that the asset returns contain more extreme values than modelled by the normal distribution. Positive excess kurtosis is, according to Lee and Lee (2000), called leptokurtic, and negative excess kurtosis is called platykurtic. Normally distributed data have a kurtosis of 3.

Figure 3.3: General forms of kurtosis

Jarque-Bera statistic. In statistics, the Jarque-Bera (JB) statistic is a test statistic for testing whether a series is normally distributed. In other words, the Jarque-Bera test is a goodness-of-fit measure of departure from normality, based on the sample kurtosis and skewness. The test statistic is defined as

JB = (n/6) * [S^2 + (K - 3)^2 / 4],

where n is the number of observations, S is the sample skewness and K is the sample kurtosis. For large sample sizes, the test statistic has a chi-square distribution with two degrees of freedom.

Augmented Dickey-Fuller statistic. The Augmented Dickey-Fuller (ADF) test is a test for a unit root in a time series sample. It is an augmented version of the Dickey-Fuller test for a larger and more complicated set of time series models. The ADF statistic used in the test is a negative number: the more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence. ADF critical values: (1%) -3.4334, (5%) -2.8627, (10%) -2.5674.

3.2.1.2. Homoscedasticity assumption

Homoscedasticity refers to the assumption that the dependent variable exhibits similar amounts of variance across the range of values of an independent variable.

Figure 3.4: Plot of homoscedasticity

Unfortunately, chapter 2, based on previous empirical studies, confirmed that financial markets usually experience unexpected events and uncertainties in prices (and returns) and exhibit non-constant variance (heteroskedasticity). Indeed, the volatility of financial asset returns changes over time, with periods when volatility is exceptionally high interspersed with periods when volatility is unusually low, namely volatility clustering. It is one of the widely recognised stylised facts (stylised statistical properties of asset returns) which are common to a wide set of financial assets.
Volatility clustering reflects the fact that high-volatility events tend to cluster in time.

3.2.1.3. Stationarity assumption

According to Cont (2001), the most essential prerequisite of any statistical analysis of market data is the existence of some statistical properties of the data under study which remain constant over time; if not, it is meaningless to try to identify them. One of the hypotheses relating to the invariance of the statistical properties of the return process in time is stationarity. This hypothesis assumes that, for any set of time instants t1, ..., tk and any time interval T, the joint distribution of the returns r(t1), ..., r(tk) is the same as the joint distribution of the returns r(t1 + T), ..., r(tk + T). The Augmented Dickey-Fuller test, in turn, will be used to examine whether the statistical properties of the return series are stationary.

3.2.1.4. Serial independence assumption

There are a large number of tests of the randomness of sample data. Autocorrelation plots are one common method of testing for randomness. Autocorrelation is the correlation between the returns at different points in time. It is the same as calculating the correlation between two different time series, except that the same time series is used twice: once in its original form and once lagged by one or more time periods. The results can range from +1 to -1. An autocorrelation of +1 represents perfect positive correlation (i.e. an increase seen in one time series leads to a proportionate increase in the other time series), while a value of -1 represents perfect negative correlation (i.e. an increase seen in one time series results in a proportionate decrease in the other time series). In terms of econometrics, the autocorrelation plot is examined based on the Ljung-Box Q statistic test. However, instead of testing randomness at each distinct lag, it tests the overall randomness based on a number of lags. The Ljung-Box test statistic can be defined as

Q = n(n + 2) * sum over j = 1, ..., h of [r_j^2 / (n - j)],

where n is the sample size, r_j is the sample autocorrelation at lag j, and h is the number of lags being tested. The hypothesis of randomness is rejected if Q exceeds the percent point function (the 1 - a quantile) of the chi-square distribution with h degrees of freedom, where a is the significance level.

3.2.2. Data Characteristics

Table 3.1 gives the descriptive statistics for the FTSE 100 and the SP 500 daily stock market prices and returns. Daily returns are computed as logarithmic price relatives: Rt = ln(Pt/Pt-1), where Pt is the closing daily price at time t. Figures 3.5a, 3.5b, 3.6a and 3.6b present the plots of returns and price indices over time. Besides, Figures 3.7a, 3.7b, 3.8a and 3.8b illustrate the frequency distributions of the FTSE 100 and the SP 500 daily return data with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009.

Table 3.1: Diagnostics of the statistical characteristics of the returns of the FTSE 100 index and the SP 500 index between 05/06/2002 and 22/06/2009.

DIAGNOSTICS                          SP 500        FTSE 100
Number of observations               1774          1781
Largest return                       10.96%        9.38%
Smallest return                      -9.47%        -9.26%
Mean return                          -0.0001       -0.0001
Variance                             0.0002        0.0002
Standard deviation                   0.0144        0.0141
Skewness                             -0.1267       -0.0978
Excess kurtosis                      9.2431        7.0322
Jarque-Bera                          694.485***    2298.153***
Augmented Dickey-Fuller (ADF) (2)    -37.6418      -45.5849
Q(12)                                20.0983*      93.3161***
  Autocorrelation                    0.04          0.03
Q2(12)                               1348.2***     1536.6***
  Autocorrelation                    0.28          0.25
Ratio of SD/mean                     144           141

Notes:
1. *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively.
2. The 95% critical value for the Augmented Dickey-Fuller statistic is -3.4158.

Figure 3.5a: The FTSE 100 daily returns from 05/06/2002 to 22/06/2009
Figure 3.5b: The SP 500 daily returns from 05/06/2002 to 22/06/2009
Figure 3.6a: The FTSE 100 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.6b: The SP 500 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.7a: Histogram showing the FTSE 100 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.7b: Histogram showing the SP 500 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8a: Diagram showing the FTSE 100's frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8b: Diagram showing the SP 500's frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009

Table 3.1 shows that the FTSE 100 and the SP 500 average daily returns are approximately 0 percent, or at least very small compared to the sample standard deviation (the standard deviation is 141 and 144 times larger than the average return for the FTSE 100 and the SP 500, respectively). This is why the mean is often set to zero when modelling daily portfolio returns, which reduces the uncertainty and imprecision of the estimates. In addition, the large standard deviation compared to the mean supports the evidence that daily changes are dominated by randomness, and the small mean can be disregarded in risk measure estimates. Moreover, the paper also employs five statistics often used in analysing data, namely skewness, kurtosis, the Jarque-Bera test, the Augmented Dickey-Fuller (ADF) test and the Ljung-Box test, to examine the full empirical period from 05/06/2002 through 22/06/2009.

Figures 3.7a and 3.7b demonstrate the histograms of the FTSE 100 and the SP 500 daily return data with the normal distribution imposed. The distribution of both indexes has longer, fatter tails and higher probabilities for extreme events than the normal distribution, in particular on the negative side (negative skewness implying that the distribution has a long left tail). Fatter negative tails mean a higher probability of large losses than the normal distribution would suggest. The distribution is also more peaked around its mean than the normal distribution; indeed, the value for kurtosis is very high (approximately 10 and 12 for the FTSE 100 and the SP 500, respectively, compared to 3 for the normal distribution; see also Figures 3.8a and 3.8b for more details). In other words, the most prominent deviation from the normal distributional assumption is the kurtosis, which can be seen from the middle bars of the histogram rising above the normal distribution. Moreover, it is obvious that outliers still exist, which indicates that excess kurtosis is still present. The Jarque-Bera test rejects normality of returns at the 1% level of significance for both indexes. So, the samples have all the typical financial characteristics: volatility clustering and leptokurtosis. Besides that, the daily returns for both indexes (presented in Figures 3.5a and 3.5b) reveal that volatility occurs in bursts; in particular, the returns were very volatile at the beginning of the examined period, from June 2002 to the middle of June 2003.
After remaining stable for about 4 years, the returns of the two well-known stock indexes were highly volatile from July 2007 (when the credit crunch was about to begin) and dramatically peaked from July 2008 to the end of June 2009. Generally, there are two recognised characteristics of the collected daily data. First, extreme outcomes occur more often and are larger than predicted by the normal distribution (fat tails). Second, the size of market movements is not constant over time (conditional volatility).

In terms of stationarity, the Augmented Dickey-Fuller test is adopted for the unit root test. The null hypothesis of this test is that there is a unit root (the time series is non-stationary); the alternative hypothesis is that the time series is stationary. If the null hypothesis is rejected, the series is a stationary time series. In this thesis, the paper employs the ADF unit root test including an intercept and a trend term on returns. The results from the ADF tests indicate that the test statistics for the FTSE 100 and the SP 500 are -45.5849 and -37.6418, respectively. Such values are significantly less than the 95% critical value of the Augmented Dickey-Fuller statistic (-3.4158). Therefore, we can reject the unit root null hypothesis and conclude that the daily return series are robustly stationary.

Finally, Table 3.1 shows the Ljung-Box test statistics for serial correlation of the return and squared return series for k = 12 lags, denoted by Q(k) and Q2(k), respectively. The Q(12) statistic is statistically significant, implying the presence of serial correlation in the FTSE 100 and the SP 500 daily return series (first-moment dependencies); in other words, the return series exhibit linear dependence.

Figure 3.9a: Autocorrelations of the FTSE 100 daily returns for lags 1 through 100, covering 05/06/2002 to 22/06/2009
Figure 3.9b: Autocorrelations of the SP 500 daily returns for lags 1 through 100, covering 05/06/2002 to 22/06/2009

Figures 3.9a and 3.9b and the autocorrelation coefficients (presented in Table 3.1) show that the FTSE 100 and the SP 500 daily returns did not display any systematic pattern and have very little autocorrelation. According to Christoffersen (2003), in this situation we can write:

Corr(Rt+1, Rt+1-j) ≈ 0, for j = 1, 2, 3, ..., 100.

Therefore, returns are almost impossible to predict from their own past. Note that, since the mean of daily returns for both indexes (-0.0001) is not significantly different from zero, the variances of the return series are measured by squared returns. The Ljung-Box Q2 test statistic for the squared returns is much higher, indicating the presence of serial correlation in the squared return series. Figures 3.10a and 3.10b and the autocorrelation coefficients (presented in Table 3.1) also confirm the autocorrelation in squared returns (variances) for the FTSE 100 and the SP 500 data; more importantly, the variance displays positive correlation with its own past, especially at short lags:

Corr(R2t+1, R2t+1-j) > 0, for j = 1, 2, 3, ..., 100.

Figure 3.10a: Autocorrelations of the FTSE 100 squared daily returns
Figure 3.10b: Autocorrelations of the SP 500 squared daily returns
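For readers who want to reproduce diagnostics of the kind reported in Table 3.1, the following is a minimal sketch rather than the thesis's own code: it assumes the log-return series built in the earlier data-preparation sketch (here called estimation) and uses scipy and statsmodels to compute skewness, excess kurtosis, the Jarque-Bera statistic, the ADF test and the Ljung-Box Q statistics.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import acorr_ljungbox

r = estimation.values  # daily log-returns from the data-preparation sketch

# Moments and the Jarque-Bera statistic, JB = (n/6) * (S^2 + (K - 3)^2 / 4).
n = len(r)
S = stats.skew(r)
K = stats.kurtosis(r, fisher=False)       # raw kurtosis (normal distribution = 3)
JB = n / 6.0 * (S**2 + (K - 3.0)**2 / 4.0)

# Augmented Dickey-Fuller test with an intercept and a trend term.
adf_stat, adf_pvalue = adfuller(r, regression="ct")[:2]

# Ljung-Box Q statistics at 12 lags for returns and squared returns.
q_returns = acorr_ljungbox(r, lags=[12], return_df=True)
q_squared = acorr_ljungbox(r**2, lags=[12], return_df=True)

print(f"skewness {S:.4f}, excess kurtosis {K - 3:.4f}, JB {JB:.1f}")
print(f"ADF statistic {adf_stat:.4f} (p = {adf_pvalue:.4f})")
print(q_returns, q_squared, sep="\n")
```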
3.3. Calculation of Value at Risk

This section puts much emphasis on how to calculate VaR figures for both single return indexes from the proposed models, namely the Historical Simulation, the RiskMetrics, the Normal-GARCH(1,1) (or N-GARCH(1,1)) and the Student-t GARCH(1,1) (or t-GARCH(1,1)) model. Except for the historical simulation model, which does not make any assumptions about the shape of the distribution of asset returns, the other models have commonly been studied under the assumption that returns are normally distributed. Based on the previous section examining the data, this assumption is rejected, because observed extreme outcomes of both single index returns occur more often and are larger than predicted by the normal distribution. Also, the volatility tends to change through time, and periods of high and low volatility tend to cluster together. Consequently, the four proposed VaR models under the normal distribution either have particular limitations or are unrealistic. Specifically, the historical simulation assumes that the historically simulated returns are independently and identically distributed through time; unfortunately, this assumption is impractical due to the volatility clustering of the empirical data. Similarly, although RiskMetrics tries to avoid relying on sample observations and makes use of additional information contained in the assumed distribution function, its normal distributional assumption is also unrealistic given the results of examining the collected data. The normal-GARCH(1,1) model and the Student-t GARCH(1,1) model, on the other hand, can capture the fat tails and volatility clustering which occur in the observed financial time series data, but their assumption of normally distributed returns is also implausible compared to the empirical data. Despite all this, the thesis still uses the four models under the normal distributional assumption of returns in order to compare and evaluate their estimated results against the predicted results based on the Student-t distributional assumption of returns. Besides, since the empirical data exhibits fatter tails than the normal distribution, the essay intentionally employs the Cornish-Fisher Expansion technique to correct the z-value from the normal distribution to account for fatter tails, and then compares these results with the two sets of results above. Therefore, in this chapter, we purposely calculate VaR by separating these three procedures into three different sections, and the final results will be discussed at length in chapter 4.

3.3.1. Components of VaR measures

Throughout the analysis, a holding period of one trading day is used. For the significance level, various values for the left-tail probability level are considered, ranging from the very conservative level of 1 percent, through 2.5 percent, to the less cautious 5 percent. The various VaR models are estimated using the historical data of the two single return index samples, stretching from 05/06/2002 through 31/07/2007 (consisting of 1305 and 1298 price observations for the FTSE 100 and the SP 500, respectively) for making the parameter estimation, and from 01/08/2007 to 22/06/2009 for predicting VaRs and backtesting. One interesting point here is that, since there are few previous empirical studies examining the performance of VaR models during periods of financial crisis, the paper deliberately backtests the validity of the VaR models within the current global financial crisis, from its beginning in August 2007.
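Although the backtesting results themselves belong to chapter 4, a minimal sketch of the violation-counting idea may help fix the setup described above. It is not the thesis's code: it assumes a series of one-day VaR forecasts (expressed as positive loss fractions) aligned with the realised returns of the forecasting window, and simply compares the observed violation rate with the chosen significance level.

```python
import numpy as np

def violation_rate(realised_returns, var_forecasts):
    """Share and count of days on which the realised loss exceeded the VaR forecast.

    var_forecasts are positive numbers (e.g. 0.032 for a 3.2% VaR), so a violation
    occurs when the realised return falls below -VaR.
    """
    realised_returns = np.asarray(realised_returns)
    var_forecasts = np.asarray(var_forecasts)
    violations = realised_returns < -var_forecasts
    return violations.mean(), violations.sum()

# Illustrative use: a well-calibrated 99% VaR should be violated on roughly 1% of
# days in the forecasting window (01/08/2007 to 22/06/2009). var_99_series is a
# hypothetical series of daily 99% VaR forecasts from any of the models below.
# rate, count = violation_rate(forecasting.values, var_99_series)
```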
3.3.2. Calculation of VaR

3.3.2.1. Non-parametric approach: Historical Simulation

As mentioned above, the historical simulation model assumes that the change in market factors from today to tomorrow will be the same as it was some time ago, and therefore VaR is computed from the historical returns distribution. Consequently, we treat this non-parametric approach in its own section. Chapter 2 has shown that calculating VaR using the historical simulation model is not mathematically complex, since the measure only requires a reasonable period of historical data. Thus, the first task is to obtain an adequate historical time series for simulating. Many previous studies report that the predicted results of the model are relatively reliable once the window of data used for simulating daily VaRs is not shorter than 1000 observed days. In this sense, the study is based on a sliding window of the previous 1305 and 1298 price observations (1304 and 1297 return observations) for the FTSE 100 and the SP 500, respectively, spanning from 05/06/2002 through 31/07/2007. We have selected this rather than a larger window because adding more historical data means adding older historical data which could be irrelevant to the future development of the return indexes.

After sorting the past returns in ascending order and attributing them to equally spaced classes, the predicted VaR is determined as the log-return lying at the target percentile; in this thesis these are the three widely used percentiles of 1%, 2.5% and 5% in the lower tail of the return distribution. The result is a frequency distribution of returns, which is displayed as a histogram and shown in Figures 3.11a and 3.11b below. The vertical axis shows the number of days on which returns are attributed to the various classes. The red vertical lines in the histogram separate the lowest 1%, 2.5% and 5% returns from the remaining (99%, 97.5% and 95%) returns. For the FTSE 100, since the histogram is drawn from 1304 daily returns, the 99%, 97.5% and 95% daily VaRs are approximately the 13th, 33rd and 65th lowest returns in this dataset, which are -3.2%, -2.28% and -1.67%, respectively, and are roughly marked in the histogram by the red vertical lines. The interpretation is that the VaR gives a number such that there is, say, a 1% chance of losing more than 3.2% of the single asset value tomorrow (on 1 August 2007). The SP 500 VaR figures, on the other hand, are a little smaller than those of the UK stock index, with -2.74%, -2.03% and -1.53% corresponding to the 99%, 97.5% and 95% confidence levels, respectively.

Figure 3.11a: Histogram of daily returns of the FTSE 100 between 05/06/2002 and 31/07/2007
Figure 3.11b: Histogram of daily returns of the SP 500 between 05/06/2002 and 31/07/2007

Following the predicted VaRs on the first day of the forecast period, we continuously calculate VaRs for the whole forecast period, covering 01/08/2007 to 22/06/2009. The question of whether the proposed non-parametric model performs accurately in the turbulent period will be discussed at length in chapter 4.
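A compact sketch of the historical simulation calculation just described is given below. It is an illustration rather than the thesis's own implementation: it reuses the estimation window built earlier and reads the 1%, 2.5% and 5% empirical percentiles of the sorted returns directly, which is the simplest variant of the method.

```python
import numpy as np

window = estimation.values   # 1304 (FTSE 100) or 1297 (SP 500) log-returns

# Historical simulation VaR: the loss at the chosen lower-tail percentile of the
# empirical return distribution over the estimation window.
for tail in (1.0, 2.5, 5.0):
    var = -np.percentile(window, tail)       # report VaR as a positive loss
    print(f"{100 - tail:.1f}% one-day VaR: {var:.4%}")

# For the rolling forecasts, the same percentile would be recomputed each day on a
# window that slides forward through the 01/08/2007 to 22/06/2009 period.
```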
3.3.2.2. Parametric approaches under the normal distributional assumption of returns

This section presents how to calculate the daily VaRs using the parametric approaches, namely RiskMetrics, the normal-GARCH(1,1) and the Student-t GARCH(1,1), under the normal distributional assumption of returns. The results and the validity of each model during the turbulent period will be considered in depth in chapter 4.

3.3.2.2.1. The RiskMetrics

Compared to the historical simulation model, RiskMetrics, as discussed in chapter 2, does not rely solely on sample observations; instead, it makes use of additional information contained in the normal distribution function. All that is needed is the current estimate of volatility. In this sense, we first calculate the daily RiskMetrics variance for both indexes, covering the parameter estimation period from 05/06/2002 to 31/07/2007, based on the well-known RiskMetrics variance formula (2.9). Specifically, we used the fixed decay factor λ = 0.94 (the RiskMetrics system suggests using λ = 0.94 to forecast one-day volatility). The other inputs are easily calculated; they are the squared log-return and the variance of the previous day, respectively. After calculating the daily variance, we then measure VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under the different confidence levels of 99%, 97.5% and 95%, based on the normal VaR formula (2.6), where the critical z-value of the normal distribution at each significance level is simply computed using the Excel function NORMSINV.

3.3.2.2.2. The Normal-GARCH(1,1) model

For GARCH models, chapter 2 confirms that the most important point is to estimate the model parameters ω, α and β. These parameters have to be calculated numerically, using the method of maximum likelihood estimation (MLE). In fact, to carry out the MLE, many previous studies use professional econometric software rather than handling the mathematical calculations by hand. In this light, the normal-GARCH(1,1) model is estimated using a well-known econometric tool, STATA (see Table 3.2 below).

Table 3.2: Parameter statistics of the Normal-GARCH(1,1) model for the FTSE 100 and the SP 500

Normal-GARCH(1,1)*
Parameters                FTSE 100      SP 500
α                         0.0955952     0.0555244
β                         0.8907231     0.9289999
ω                         0.0000012     0.0000011
α + β                     0.9863183     0.9845243
Number of observations    1304          1297
Log likelihood            4401.63       4386.964

* Note: In this section, we report the results from the Normal-GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the normal distribution, with a significance level of 5%.

According to Table 3.2, the coefficients of the lagged squared returns (α) for both indexes are positive, indicating that strong ARCH effects are apparent in both financial markets. Also, the coefficients of the lagged conditional variance (β) are significantly positive and less than one, indicating that the impact of 'old' news on volatility is significant. The magnitude of the coefficient β is especially high (around 0.89-0.93), indicating a long memory in the variance. The estimate of ω was 1.2E-06 for the FTSE 100 and 1.1E-06 for the SP 500, implying a long-run standard deviation of daily market returns of about 0.94% and 0.84%, respectively. The log-likelihood for this model was 4401.63 for the FTSE 100 and 4386.964 for the SP 500. The log-likelihood ratios rejected the hypothesis of normality very strongly.
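To make the mechanics concrete, the following sketch turns fitted GARCH(1,1) parameters of the kind reported in Table 3.2 into a one-day-ahead variance and a normal VaR. It is illustrative only: formulas (2.6) and (2.11) belong to chapter 2 and are not reproduced here, and the recursion is written generically, so setting omega = 0, alpha = 1 - 0.94 and beta = 0.94 would give the RiskMetrics (EWMA) case instead.

```python
import numpy as np
from scipy.stats import norm

# FTSE 100 Normal-GARCH(1,1) parameters from Table 3.2.
omega, alpha, beta = 0.0000012, 0.0955952, 0.8907231

def garch_variance_path(returns, omega, alpha, beta):
    """Conditional variance recursion: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()              # a simple choice of starting value
    for t in range(1, len(returns) + 1):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2                          # last element is the one-day-ahead forecast

sigma2 = garch_variance_path(estimation.values, omega, alpha, beta)
sigma_next = np.sqrt(sigma2[-1])

# Normal VaR at the 1%, 2.5% and 5% significance levels (reported as positive losses).
for p in (0.01, 0.025, 0.05):
    var = -norm.ppf(p) * sigma_next
    print(f"{1 - p:.1%} one-day VaR: {var:.4%}")
```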
After calculating the model parameters, we measure the conditional variance (volatility) for the parameter estimation period, covering 05/06/2002 to 31/07/2007, based on the conditional variance formula (2.11), where the squared log-return and the conditional variance of the previous day are the inputs. We then measure the predicted daily VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under the confidence levels of 99%, 97.5% and 95% using the normal VaR formula (2.6). Again, the critical z-value of the normal distribution under the significance levels of 1%, 2.5% and 5% is simply computed using the Excel function NORMSINV.

3.3.2.2.3. The Student-t GARCH(1,1) model

Different from the Normal-GARCH(1,1) approach, this model assumes that the volatility (or the errors of the returns) follows a Student-t distribution. In fact, many previous studies suggest that the symmetric GARCH(1,1) model with volatility following the Student-t distribution is more accurate than one with the normal distribution when examining financial time series. Accordingly, the paper additionally employs the Student-t GARCH(1,1) approach to measure VaRs. In this section, we use this model under the normal distributional assumption of returns. The first step is to estimate the model parameters using the method of maximum likelihood estimation, obtained with STATA (see Table 3.3).

Table 3.3: Parameter statistics of the Student-t GARCH(1,1) model for the FTSE 100 and the SP 500

Student-t GARCH(1,1)*
Parameters                FTSE 100      SP 500
α                         0.0926120     0.0569293
β                         0.8946485     0.9354794
ω                         0.0000011     0.0000006
α + β                     0.9872605     0.9924087
Number of observations    1304          1297
Log likelihood            4406.50       4399.24

* Note: In this section, we report the results from the Student-t GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the Student-t distribution, with a significance level of 5%.

Table 3.3 identifies the same characteristics of the Student-t GARCH(1,1) model parameters as for the normal-GARCH(1,1) approach. Specifically, the results for α and β show that there were evidently strong ARCH effects in the UK and US financial markets during the parameter estimation period, from 05/06/2002 to 31/07/2007. Moreover, as Floros (2008) mentioned, there was also a considerable impact of 'old' news on volatility as well as a long memory in the variance. We then follow similar steps to those used when calculating VaRs with the normal-GARCH(1,1) model.

3.3.2.3. Parametric approaches under the normal distributional assumption of returns modified by the Cornish-Fisher Expansion technique

Section 3.3.2.2 measured the VaRs using the parametric approaches under the assumption that the returns are normally distributed. Regardless of their results and performance, it is clear that this assumption is impractical, since the collected empirical data exhibits fatter tails than the normal distribution. Consequently, in this section the study intentionally employs the Cornish-Fisher Expansion (CFE) technique to correct the z-value from the assumption of the normal distribution so as to account for the fatter tails. Again, the question of whether the proposed models perform well within the recent crisis period will be assessed at length in chapter 4.
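For orientation, the standard fourth-moment Cornish-Fisher adjustment on which such a correction is usually based is sketched below. The thesis's exact formula sits in chapter 2 and is not reproduced here, so treat this as an assumed textbook form: the normal quantile z is shifted using the sample skewness S and excess kurtosis (K - 3) before being multiplied by the volatility estimate.

```python
from scipy.stats import norm

def cornish_fisher_z(p, skew, excess_kurt):
    """Cornish-Fisher corrected quantile for lower-tail probability p (textbook form)."""
    z = norm.ppf(p)
    return (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * excess_kurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)

# Illustration with the FTSE 100 full-sample moments from Table 3.1.
z_cf = cornish_fisher_z(0.01, skew=-0.0978, excess_kurt=7.0322)
print(f"normal z at 1%: {norm.ppf(0.01):.3f}, Cornish-Fisher z: {z_cf:.3f}")
# The CFE-modified VaR then uses z_cf in place of the normal z in the VaR formula.
```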
The CFE-modified RiskMetrics Similar VaR Models in Predicting Equity Market Risk VaR Models in Predicting Equity Market Risk Chapter 3 Research Design This chapter represents how to apply proposed VaR models in predicting equity market risk. Basically, the thesis first outlines the collected empirical data. We next focus on verifying assumptions usually engaged in the VaR models and then identifying whether the data characteristics are in line with these assumptions through examining the observed data. Various VaR models are subsequently discussed, beginning with the non-parametric approach (the historical simulation model) and followed by the parametric approaches under different distributional assumptions of returns and intentionally with the combination of the Cornish-Fisher Expansion technique. Finally, backtesting techniques are employed to value the performance of the suggested VaR models. 3.1. Data The data used in the study are financial time series that reflect the daily historical price changes for two single equity index assets, including the FTSE 100 index of the UK market and the SP 500 of the US market. Mathematically, instead of using the arithmetic return, the paper employs the daily log-returns. The full period, which the calculations are based on, stretches from 05/06/2002 to 22/06/2009 for each single index. More precisely, to implement the empirical test, the period will be divided separately into two sub-periods: the first series of empirical data, which are used to make the parameter estimation, spans from 05/06/2002 to 31/07/2007. The rest of the data, which is between 01/08/2007 and 22/06/2009, is used for predicting VaR figures and backtesting. Do note here is that the latter stage is exactly the current global financial crisis period which began from the August of 2007, dramatically peaked in the ending months of 2008 and signally reduced significantly in the middle of 2009. Consequently, the study will purposely examine the accuracy of the VaR models within the volatile time. 3.1.1. FTSE 100 index The FTSE 100 Index is a share index of the 100 most highly capitalised UK companies listed on the London Stock Exchange, began on 3rd January 1984. FTSE 100 companies represent about 81% of the market capitalisation of the whole London Stock Exchange and become the most widely used UK stock market indicator. In the dissertation, the full data used for the empirical analysis consists of 1782 observations (1782 working days) of the UK FTSE 100 index covering the period from 05/06/2002 to 22/06/2009. 3.1.2. SP 500 index The SP 500 is a value weighted index published since 1957 of the prices of 500 large-cap common stocks actively traded in the United States. The stocks listed on the SP 500 are those of large publicly held companies that trade on either of the two largest American stock market companies, the NYSE Euronext and NASDAQ OMX. After the Dow Jones Industrial Average, the SP 500 is the most widely followed index of large-cap American stocks. The SP 500 refers not only to the index, but also to the 500 companies that have their common stock included in the index and consequently considered as a bellwether for the US economy. Similar to the FTSE 100, the data for the SP 500 is also observed during the same period with 1775 observations (1775 working days). 3.2. Data Analysis For the VaR models, one of the most important aspects is assumptions relating to measuring VaR. 
This section first discusses several VaR assumptions and then examines the collected empirical data characteristics. 3.2.1. Assumptions 3.2.1.1. Normality assumption Normal distribution As mentioned in the chapter 2, most VaR models assume that return distribution is normally distributed with mean of 0 and standard deviation of 1 (see figure 3.1). Nonetheless, the chapter 2 also shows that the actual return in most of previous empirical investigations does not completely follow the standard distribution. Figure 3.1: Standard Normal Distribution Skewness The skewness is a measure of asymmetry of the distribution of the financial time series around its mean. Normally data is assumed to be symmetrically distributed with skewness of 0. A dataset with either a positive or negative skew deviates from the normal distribution assumptions (see figure 3.2). This can cause parametric approaches, such as the Riskmetrics and the symmetric normal-GARCH(1,1) model under the assumption of standard distributed returns, to be less effective if asset returns are heavily skewed. The result can be an overestimation or underestimation of the VaR value depending on the skew of the underlying asset returns. Figure 3.2: Plot of a positive or negative skew Kurtosis The kurtosis measures the peakedness or flatness of the distribution of a data sample and describes how concentrated the returns are around their mean. A high value of kurtosis means that more of data’s variance comes from extreme deviations. In other words, a high kurtosis means that the assets returns consist of more extreme values than modeled by the normal distribution. This positive excess kurtosis is, according to Lee and Lee (2000) called leptokurtic and a negative excess kurtosis is called platykurtic. The data which is normally distributed has kurtosis of 3. Figure 3.3: General forms of Kurtosis Jarque-Bera Statistic In statistics, Jarque-Bera (JB) is a test statistic for testing whether the series is normally distributed. In other words, the Jarque-Bera test is a goodness-of-fit measure of departure from normality, based on the sample kurtosis and skewness. The test statistic JB is defined as: where n is the number of observations, S is the sample skewness, K is the sample kurtosis. For large sample sizes, the test statistic has a Chi-square distribution with two degrees of freedom. Augmented Dickey–Fuller Statistic Augmented Dickey–Fuller test (ADF) is a test for a unit root in a time series sample. It is an augmented version of the Dickey–Fuller test for a larger and more complicated set of time series models. The ADF statistic used in the test is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence. ADF critical values: (1%) –3.4334, (5%) –2.8627, (10%) –2.5674. 3.2.1.2. Homoscedasticity assumption Homoscedasticity refers to the assumption that the dependent variable exhibits similar amounts of variance across the range of values for an independent variable. Figure 3.4: Plot of Homoscedasticity Unfortunately, the chapter 2, based on the previous empirical studies confirmed that the financial markets usually experience unexpected events, uncertainties in prices (and returns) and exhibit non-constant variance (Heteroskedasticity). Indeed, the volatility of financial asset returns changes over time, with periods when volatility is exceptionally high interspersed with periods when volatility is unusually low, namely volatility clustering. 
It is one of the widely stylised facts (stylised statistical properties of asset returns) which are common to a common set of financial assets. The volatility clustering reflects that high-volatility events tend to cluster in time. 3.2.1.3. Stationarity assumption According to Cont (2001), the most essential prerequisite of any statistical analysis of market data is the existence of some statistical properties of the data under study which remain constant over time, if not it is meaningless to try to recognize them. One of the hypotheses relating to the invariance of statistical properties of the return process in time is the stationarity. This hypothesis assumes that for any set of time instants ,†¦, and any time interval the joint distribution of the returns ,†¦, is the same as the joint distribution of returns ,†¦,. The Augmented Dickey-Fuller test, in turn, will also be used to test whether time-series models are accurately to examine the stationary of statistical properties of the return. 3.2.1.4. Serial independence assumption There are a large number of tests of randomness of the sample data. Autocorrelation plots are one common method test for randomness. Autocorrelation is the correlation between the returns at the different points in time. It is the same as calculating the correlation between two different time series, except that the same time series is used twice once in its original form and once lagged one or more time periods. The results can range from  +1 to -1. An autocorrelation of  +1 represents perfect positive correlation (i.e. an increase seen in one time series will lead to a proportionate increase in the other time series), while a value of -1 represents perfect negative correlation (i.e. an increase seen in one time series results in a proportionate decrease in the other time series). In terms of econometrics, the autocorrelation plot will be examined based on the Ljung-Box Q statistic test. However, instead of testing randomness at each distinct lag, it tests the overall randomness based on a number of lags. The Ljung-Box test can be defined as: where n is the sample size,is the sample autocorrelation at lag j, and h is the number of lags being tested. The hypothesis of randomness is rejected if whereis the percent point function of the Chi-square distribution and the ÃŽ ± is the quantile of the Chi-square distribution with h degrees of freedom. 3.2.2. Data Characteristics Table 3.1 gives the descriptive statistics for the FTSE 100 and the SP 500 daily stock market prices and returns. Daily returns are computed as logarithmic price relatives: Rt = ln(Pt/pt-1), where Pt is the closing daily price at time t. Figures 3.5a and 3.5b, 3.6a and 3.6b present the plots of returns and price index over time. Besides, Figures 3.7a and 3.7b, 3.8a and 3.8b illustrate the combination between the frequency distribution of the FTSE 100 and the SP 500 daily return data and a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009. Table 3.1: Diagnostics table of statistical characteristics on the returns of the FTSE 100 Index and SP 500 index between 05/06/2002 and 22/6/2009. 
3.2.2. Data Characteristics
Table 3.1 gives the descriptive statistics for the FTSE 100 and the SP 500 daily stock market prices and returns. Daily returns are computed as logarithmic price relatives: Rt = ln(Pt/Pt-1), where Pt is the closing daily price at time t. Figures 3.5a and 3.5b, 3.6a and 3.6b present the plots of returns and price indexes over time. Besides, Figures 3.7a and 3.7b, 3.8a and 3.8b illustrate the frequency distribution of the FTSE 100 and the SP 500 daily return data with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009.

Table 3.1: Diagnostics table of statistical characteristics on the returns of the FTSE 100 Index and SP 500 Index between 05/06/2002 and 22/6/2009.

Diagnostics                         SP 500                        FTSE 100
Number of observations              1774                          1781
Largest return                      10.96%                        9.38%
Smallest return                     -9.47%                        -9.26%
Mean return                         -0.0001                       -0.0001
Variance                            0.0002                        0.0002
Standard deviation                  0.0144                        0.0141
Skewness                            -0.1267                       -0.0978
Excess kurtosis                     9.2431                        7.0322
Jarque-Bera                         694.485***                    2298.153***
Augmented Dickey-Fuller (ADF) [2]   -37.6418                      -45.5849
Q(12)                               20.0983* (autocorr: 0.04)     93.3161*** (autocorr: 0.03)
Q2(12)                              1348.2*** (autocorr: 0.28)    1536.6*** (autocorr: 0.25)
Ratio of SD/mean                    144                           141

Note: 1. *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively. 2. 95% critical value for the augmented Dickey-Fuller statistic = -3.4158

Figure 3.5a: The FTSE 100 daily returns from 05/06/2002 to 22/06/2009
Figure 3.5b: The SP 500 daily returns from 05/06/2002 to 22/06/2009
Figure 3.6a: The FTSE 100 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.6b: The SP 500 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.7a: Histogram showing the FTSE 100 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.7b: Histogram showing the SP 500 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8a: Diagram showing the FTSE 100's frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8b: Diagram showing the SP 500's frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009

Table 3.1 shows that the FTSE 100 and the SP 500 average daily returns are approximately 0 percent, or at least very small compared to the sample standard deviation (the standard deviation is 141 and 144 times larger than the size of the average return for the FTSE 100 and the SP 500, respectively). This is why the mean is often set at zero when modelling daily portfolio returns, which reduces the uncertainty and imprecision of the estimates. In addition, the large standard deviation compared to the mean supports the evidence that daily changes are dominated by randomness and that the small mean can be disregarded in risk measure estimates. Moreover, the paper also employs five statistics which are often used in analysing data, namely the skewness, kurtosis, Jarque-Bera, Augmented Dickey-Fuller (ADF) and Ljung-Box tests, to examine the full empirical period covering 05/06/2002 through 22/06/2009. Figures 3.7a and 3.7b demonstrate the histogram of the FTSE 100 and the SP 500 daily return data with the normal distribution imposed. The distribution of both indexes has longer, fatter tails and higher probabilities for extreme events than the normal distribution, in particular on the negative side (negative skewness implying that the distribution has a long left tail). Fatter negative tails mean a higher probability of large losses than the normal distribution would suggest. The distribution is also more peaked around its mean than the normal distribution; indeed, the value for kurtosis is very high (approximately 10 and 12 for the FTSE 100 and the SP 500, respectively, compared with 3 for the normal distribution) (also see Figures 3.8a and 3.8b for more details). In other words, the most prominent deviation from the normal distributional assumption is the kurtosis, which can be seen from the middle bars of the histogram rising above the normal distribution. Moreover, it is obvious that outliers still exist, which indicates that excess kurtosis is still present.
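As a cross-check on Table 3.1, the same diagnostics can be generated from a raw price series in a few lines. This is a minimal, hypothetical sketch (the `prices` Series and its source are assumptions, not part of the thesis's toolchain), using the same log-return definition Rt = ln(Pt/Pt-1):

```python
import numpy as np
import pandas as pd
from scipy import stats


def diagnostics(prices: pd.Series) -> pd.Series:
    """Descriptive statistics of daily log-returns computed from a series of closing prices."""
    returns = np.log(prices / prices.shift(1)).dropna()   # Rt = ln(Pt / Pt-1)
    return pd.Series({
        "Number of observations": len(returns),
        "Largest return": returns.max(),
        "Smallest return": returns.min(),
        "Mean return": returns.mean(),
        "Variance": returns.var(),
        "Standard deviation": returns.std(),
        "Skewness": stats.skew(returns),
        "Excess kurtosis": stats.kurtosis(returns),        # Fisher definition: normal = 0
        "Ratio of SD/|mean|": returns.std() / abs(returns.mean()),
    })
```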
The Jarque-Bera test rejects normality of returns at the 1% level of significance for both indexes. The samples therefore exhibit the typical characteristics of financial returns: leptokurtosis and volatility clustering. Besides that, the daily returns for both indexes (presented in Figures 3.5a and 3.5b) reveal that volatility occurs in bursts; in particular, the returns were very volatile at the beginning of the examined period, from June 2002 to the middle of June 2003. After remaining stable for about 4 years, the returns of the two well-known stock indexes were highly volatile from July 2007 (when the credit crunch was about to begin) and peaked dramatically from July 2008 to the end of June 2009. Generally, there are two recognised characteristics of the collected daily data. First, extreme outcomes occur more often and are larger than predicted by the normal distribution (fat tails). Second, the size of market movements is not constant over time (conditional volatility).

In terms of stationarity, the Augmented Dickey-Fuller test is adopted for the unit root test. The null hypothesis of this test is that there is a unit root (the time series is non-stationary). The alternative hypothesis is that the time series is stationary. If the null hypothesis is rejected, it means that the series is a stationary time series. In this thesis, the paper employs the ADF unit root test including an intercept and a trend term on the returns. The results from the ADF tests indicate that the test statistics for the FTSE 100 and the SP 500 are -45.5849 and -37.6418, respectively. Such values are significantly less than the 95% critical value for the augmented Dickey-Fuller statistic (-3.4158). Therefore, we can reject the unit root null hypothesis and conclude that the daily return series are robustly stationary.

Finally, Table 3.1 shows the Ljung-Box test statistics for serial correlation of the return and squared return series for k = 12 lags, denoted by Q(k) and Q2(k), respectively. The Q(12) statistic is statistically significant, implying the presence of serial correlation in the FTSE 100 and the SP 500 daily return series (first moment dependencies). In other words, the return series exhibit linear dependence.

Figure 3.9a: Autocorrelations of the FTSE 100 daily returns for Lags 1 through 100, covering 05/06/2002 to 22/06/2009.
Figure 3.9b: Autocorrelations of the SP 500 daily returns for Lags 1 through 100, covering 05/06/2002 to 22/06/2009.

Figures 3.9a and 3.9b and the autocorrelation coefficients (presented in Table 3.1) show that the FTSE 100 and the SP 500 daily returns do not display any systematic pattern and have very little autocorrelation. According to Christoffersen (2003), in this situation we can write:

Corr(Rt+1, Rt+1-λ) ≈ 0, for λ = 1, 2, 3, ..., 100

Therefore, returns are almost impossible to predict from their own past. One note is that since the mean of daily returns for both indexes (-0.0001) is not significantly different from zero, the variances of the return series are measured by squared returns. The Ljung-Box Q2 test statistic for the squared returns is much higher, indicating the presence of serial correlation in the squared return series. Figures 3.10a and 3.10b and the autocorrelation coefficients (presented in Table 3.1) also confirm the autocorrelations in squared returns (variances) for the FTSE 100 and the SP 500 data; more importantly, the variance displays positive correlation with its own past, especially at short lags:

Corr(R²t+1, R²t+1-λ) > 0, for λ = 1, 2, 3, ..., 100
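The two autocorrelation patterns behind Figures 3.9 and 3.10 can also be checked numerically. The sketch below is illustrative only (the `returns` array is a hypothetical stand-in for either index's daily log-returns); it computes the sample autocorrelations of returns and of squared returns for lags 1 through 100.

```python
import numpy as np


def sample_autocorrelations(x, max_lag=100):
    """Sample autocorrelations rho_1 ... rho_max_lag of a series x."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[lag:] * x[:-lag]) / denom for lag in range(1, max_lag + 1)])


# acf_returns = sample_autocorrelations(returns)        # expected to hover around zero
# acf_squared = sample_autocorrelations(returns ** 2)   # expected to be clearly positive at short lags
```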
Figure 3.10a: Autocorrelations of the FTSE 100 squared daily returns
Figure 3.10b: Autocorrelations of the SP 500 squared daily returns

3.3. Calculation of Value at Risk
This section puts much emphasis on how to calculate VaR figures for both single return indexes from the proposed models, including the Historical Simulation, the RiskMetrics, the Normal-GARCH(1,1) (or N-GARCH(1,1)) and the Student-t GARCH(1,1) (or t-GARCH(1,1)) model. Except for the historical simulation model, which does not make any assumption about the shape of the distribution of asset returns, the other models have commonly been studied under the assumption that returns are normally distributed. Based on the data examination in the previous section, this assumption is rejected because the observed extreme outcomes of both single index returns occur more often and are larger than predicted by the normal distribution. Also, the volatility tends to change through time, and periods of high and low volatility tend to cluster together. Consequently, the four proposed VaR models under the normal distribution either have particular limitations or rest on unrealistic assumptions. Specifically, the historical simulation assumes that the historically simulated returns are independently and identically distributed through time. Unfortunately, this assumption is impractical due to the volatility clustering of the empirical data. Similarly, although the RiskMetrics model tries to avoid relying on sample observations and makes use of additional information contained in the assumed distribution function, its normal distributional assumption is also unrealistic given the results of examining the collected data. The normal-GARCH(1,1) model and the Student-t GARCH(1,1) model, on the other hand, can capture the fat tails and volatility clustering which occur in the observed financial time series data, but their assumption of normally distributed returns is also inconsistent with the empirical data. Despite all this, the thesis still uses the four models under the normal distributional assumption of returns in order to compare and evaluate their estimated results against the predicted results based on the Student-t distributional assumption of returns. In addition, since the empirical data exhibit fatter tails than the normal distribution, the study also employs the Cornish-Fisher Expansion technique to correct the z-value from the normal distribution to account for the fatter tails, and then compares these results with the two sets of results above. Therefore, in this chapter, we purposely calculate VaR by separating these three procedures into three different sections, and the final results will be discussed at length in Chapter 4.

3.3.1. Components of VaR measures
Throughout the analysis, a holding period of one trading day will be used. For the significance level, various values for the left-tail probability level will be considered, ranging from the very conservative level of 1 percent through the intermediate 2.5 percent to the less cautious 5 percent. The various VaR models will be estimated using the historical data of the two single return index samples, stretching from 05/06/2002 through 31/07/2007 (consisting of 1305 and 1298 price observations for the FTSE 100 and the SP 500, respectively) for the parameter estimation, and from 01/08/2007 to 22/06/2009 for predicting VaRs and backtesting.
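The building blocks just listed (the one-day horizon, the three left-tail probabilities and the estimation/forecast split) translate directly into a few lines of setup code. The sketch below is a hypothetical illustration, with the date-indexed `returns` Series assumed, and uses scipy's `norm.ppf`, the equivalent of the Excel NORMSINV function referred to later in this chapter.

```python
import pandas as pd
from scipy.stats import norm


def split_sample(returns: pd.Series):
    """Split a date-indexed series of daily log-returns into the two windows used in the thesis."""
    estimation_window = returns.loc["2002-06-05":"2007-07-31"]   # parameter estimation period
    forecast_window = returns.loc["2007-08-01":"2009-06-22"]     # VaR prediction and backtesting period
    return estimation_window, forecast_window


# Left-tail probabilities and the corresponding standard normal quantiles
# (the Excel NORMSINV equivalents: roughly -2.326, -1.960 and -1.645).
tail_probs = [0.01, 0.025, 0.05]
z_values = {p: norm.ppf(p) for p in tail_probs}
```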
One interesting point here is that, since there are few previous empirical studies examining the performance of VaR models during periods of financial crisis, the paper deliberately backtests the validity of the VaR models within the current global financial crisis, from its beginning in August 2007.

3.3.2. Calculation of VaR

3.3.2.1. Non-parametric approach: Historical Simulation
As mentioned above, the historical simulation model assumes that the change in market factors from today to tomorrow will be the same as it was some time ago, and therefore it is computed from the historical returns distribution. Consequently, we treat this non-parametric approach in a separate section. Chapter 2 showed that calculating VaR using the historical simulation model is not mathematically complex, since the measure only requires a reasonable period of historical data. Thus, the first task is to obtain an adequate historical time series for simulating. Many previous studies suggest that the predicted results of the model are relatively reliable once the window of data used for simulating daily VaRs is not shorter than 1,000 observed days. In this sense, the study will be based on a sliding window of the previous 1305 and 1298 price observations (1304 and 1297 return observations) for the FTSE 100 and the SP 500, respectively, spanning from 05/06/2002 through 31/07/2007. We selected this rather than larger windows since adding more historical data means adding older data, which could be irrelevant to the future development of the return indexes. After sorting the past returns in ascending order and attributing them to equally spaced classes, the predicted VaR is determined as the log-return lying on the target percentile; in this thesis, the three widely used percentiles of 1%, 2.5% and 5% of the lower tail of the return distribution are considered. The result is a frequency distribution of returns, which is displayed as a histogram and shown in Figures 3.11a and 3.11b below. The vertical axis shows the number of days on which returns are attributed to the various classes. The red vertical lines in the histogram separate the lowest 1%, 2.5% and 5% returns from the remaining (99%, 97.5% and 95%) returns. For the FTSE 100, since the histogram is drawn from 1304 daily returns, the 99%, 97.5% and 95% daily VaRs are approximately the 13th, 33rd and 65th lowest returns in this dataset, which are -3.2%, -2.28% and -1.67%, respectively, and are roughly marked in the histogram by the red vertical lines. The interpretation is that the VaR gives a number such that there is, say, a 1% chance of losing more than 3.2% of the single asset value tomorrow (on 1st August 2007). The SP 500 VaR figures, on the other hand, are a little smaller than those of the UK stock index, with -2.74%, -2.03% and -1.53% corresponding to the 99%, 97.5% and 95% confidence levels, respectively.

Figure 3.11a: Histogram of daily returns of FTSE 100 between 05/06/2002 and 31/07/2007
Figure 3.11b: Histogram of daily returns of SP 500 between 05/06/2002 and 31/07/2007

Having predicted VaRs for the first day of the forecast period, we then calculate VaRs continuously over the forecast period, covering 01/08/2007 to 22/06/2009. The question of whether the proposed non-parametric model performs accurately in the turbulent period will be discussed at length in Chapter 4.
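A minimal sketch of this percentile rule follows. It is illustrative rather than the thesis's own implementation: the window variable is hypothetical, and the order-statistic convention (rounding tail_prob times n to the nearest integer, which reproduces the 13th, 33rd and 65th lowest of 1,304 returns quoted above) is an assumption, since other interpolation conventions would give slightly different figures.

```python
import numpy as np


def historical_var(window_returns, tail_prob):
    """One-day historical-simulation VaR: the k-th lowest return in the window,
    with k = round(tail_prob * n), e.g. the 13th lowest of 1304 returns at the 1% level."""
    r = np.sort(np.asarray(window_returns, dtype=float))   # ascending: largest losses first
    k = max(1, round(tail_prob * len(r)))
    return r[k - 1]                                        # a negative log-return, e.g. about -0.032


# Rolling one-day-ahead VaRs over the forecast period, re-using a sliding window
# of roughly the previous 1304 returns (the window length described above):
# var_1pct_t = historical_var(returns[t - 1304:t], 0.01)
```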
3.3.2.2. Parametric approaches under the normal distributional assumption of returns
This section presents how to calculate the daily VaRs using the parametric approaches, including the RiskMetrics, the normal-GARCH(1,1) and the Student-t GARCH(1,1) models, under the normal distributional assumption of returns. The results and the validity of each model during the turbulent period will be considered in depth in Chapter 4.

3.3.2.2.1. The RiskMetrics
Compared with the historical simulation model, RiskMetrics, as discussed in Chapter 2, does not rely solely on sample observations; instead, it makes use of additional information contained in the normal distribution function. All that is needed is the current estimate of volatility. In this sense, we first calculate the daily RiskMetrics variance for both indexes over the parameter estimation period from 05/06/2002 to 31/07/2007, based on the well-known RiskMetrics variance formula (2.9). Specifically, we use the fixed decay factor λ = 0.94 (the RiskMetrics system suggested using λ = 0.94 to forecast one-day volatility). The other inputs are easily obtained; for instance, the squared log-return and the variance of the previous day enter the formula directly. After calculating the daily variance, we continuously measure VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under the different confidence levels of 99%, 97.5% and 95%, based on the normal VaR formula (2.6), where the critical z-value of the normal distribution at each significance level is simply computed using the Excel function NORMSINV.

3.3.2.2.2. The Normal-GARCH(1,1) model
For GARCH models, Chapter 2 confirms that the most important point is to estimate the model parameters ω, α and β. These parameters have to be estimated numerically, using the method of maximum likelihood estimation (MLE). In fact, in order to carry out the MLE, many previous studies use professional econometric software rather than handling the mathematical calculations by hand. In light of this, the normal-GARCH(1,1) model is estimated using a well-known econometric tool, STATA (see Table 3.2 below).

Table 3.2. The parameter statistics of the Normal-GARCH(1,1) model for the FTSE 100 and the SP 500

Normal-GARCH(1,1)*
Parameters                  FTSE 100        SP 500
α                           0.0955952       0.0555244
β                           0.8907231       0.9289999
ω                           0.0000012       0.0000011
α + β                       0.9863183       0.9845243
Number of observations      1304            1297
Log likelihood              4401.63         4386.964

* Note: In this section, we report the results from the Normal-GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the normal distribution, with a significance level of 5%.

According to Table 3.2, the coefficients of the lagged squared returns (α) for both indexes are positive, indicating that strong ARCH effects are apparent in both financial markets. Also, the coefficients of the lagged conditional variance (β) are significantly positive and less than one, indicating that the impact of 'old' news on volatility is significant. The magnitude of the coefficient β is especially high (around 0.89 – 0.93), indicating a long memory in the variance. The estimate of ω was 1.2E-06 for the FTSE 100 and 1.1E-06 for the SP 500, implying a long-run standard deviation of daily market returns of about 0.94% and 0.84%, respectively. The log-likelihood for this model was 4401.63 and 4386.964 for the FTSE 100 and the SP 500, correspondingly. The log-likelihood ratios rejected the hypothesis of normality very strongly.
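To make the two variance recursions concrete, the sketch below rolls them forward in Python. It is a hedged illustration rather than the thesis's STATA/Excel workflow: formulas (2.6), (2.9) and (2.11) are defined in Chapter 2 and not reproduced here, so the standard textbook forms of the RiskMetrics EWMA and the GARCH(1,1) recursion are assumed, and the FTSE 100 estimates from Table 3.2 are plugged in purely as an example.

```python
import numpy as np
from scipy.stats import norm


def riskmetrics_variance(returns, lam=0.94):
    """EWMA recursion (standard RiskMetrics form): s2_t = lam*s2_{t-1} + (1-lam)*r_{t-1}^2."""
    r = np.asarray(returns, dtype=float)
    s2 = np.empty(len(r))
    s2[0] = np.var(r)                           # a common choice of seed value
    for t in range(1, len(r)):
        s2[t] = lam * s2[t - 1] + (1.0 - lam) * r[t - 1] ** 2
    return s2


def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1) recursion: s2_t = omega + alpha*r_{t-1}^2 + beta*s2_{t-1}."""
    r = np.asarray(returns, dtype=float)
    s2 = np.empty(len(r))
    s2[0] = omega / (1.0 - alpha - beta)        # long-run (unconditional) variance as seed
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return s2


# One-day VaR under normality with the mean set to zero: VaR_t = z_p * sigma_t.
# Example with the FTSE 100 estimates from Table 3.2 (omega, alpha, beta as reported there):
# s2 = garch11_variance(returns, 0.0000012, 0.0955952, 0.8907231)
# var_99 = norm.ppf(0.01) * np.sqrt(s2)         # 99% one-day VaR series (negative numbers)
```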
After calculating the model parameters, we begin measuring the conditional variance (volatility) for the parameter estimation period, covering 05/06/2002 to 31/07/2007, based on the conditional variance formula (2.11), where the squared log-return and the conditional variance of the previous day are the inputs. We then measure the predicted daily VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under confidence levels of 99%, 97.5% and 95% using the normal VaR formula (2.6). Again, the critical z-value of the normal distribution under significance levels of 1%, 2.5% and 5% is simply computed using the Excel function NORMSINV.

3.3.2.2.3. The Student-t GARCH(1,1) model
Unlike the Normal-GARCH(1,1) approach, this model assumes that the errors of the returns follow the Student-t distribution. In fact, many previous studies suggested that using the symmetric GARCH(1,1) model with errors following the Student-t distribution is more accurate than with the normal distribution when examining financial time series. Accordingly, the paper additionally employs the Student-t GARCH(1,1) approach to measure VaRs. In this section, we use this model under the normal distributional assumption of returns. The first step is to estimate the model parameters using the method of maximum likelihood estimation, obtained with STATA (see Table 3.3).

Table 3.3. The parameter statistics of the Student-t GARCH(1,1) model for the FTSE 100 and the SP 500

Student-t GARCH(1,1)*
Parameters                  FTSE 100        SP 500
α                           0.0926120       0.0569293
β                           0.8946485       0.9354794
ω                           0.0000011       0.0000006
α + β                       0.9872605       0.9924087
Number of observations      1304            1297
Log likelihood              4406.50         4399.24

* Note: In this section, we report the results from the Student-t GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the Student-t distribution, with a significance level of 5%.

Table 3.3 identifies the same characteristics in the Student-t GARCH(1,1) model parameters as in the normal-GARCH(1,1) approach. Specifically, the estimates of α show that strong ARCH effects were evidently present in the UK and US financial markets during the parameter estimation period, from 05/06/2002 to 31/07/2007. Moreover, as Floros (2008) mentioned, there was also a considerable impact of 'old' news on volatility as well as a long memory in the variance. We then follow the same steps as in calculating VaRs with the normal-GARCH(1,1) model.

3.3.2.3. Parametric approaches under the normal distributional assumption of returns modified by the Cornish-Fisher Expansion technique
Section 3.3.2.2 measured the VaRs using the parametric approaches under the assumption that the returns are normally distributed. Regardless of their results and performance, this assumption is clearly impractical, since the collected empirical data exhibit fatter tails than the normal distribution. Consequently, in this section the study intentionally employs the Cornish-Fisher Expansion (CFE) technique to correct the z-value from the normal distribution so as to account for the fatter tails. Again, the question of whether the proposed models performed well during the recent crisis period will be assessed at length in Chapter 4.

3.3.2.3.1. The CFE-modified RiskMetrics
Similar

Thursday, October 24, 2019

British-Chinese Relations in the Nineteenth Century and Alicia Bewicke

British-Chinese Relations in the Nineteenth Century and Alicia Bewicke Little's Novel, A Marriage in China The year was 1842, and Britain had just finished a successful military campaign in China, a campaign that also signified a rather humiliating defeat for the Chinese army. The first Opium War reestablished Britain's profitable opium trade routes from India to China, and also established a new mode of British-Chinese relations, one that resulted in British control of the new colony of Hong Kong and semicolonial control over various treaty ports. The progressive optimism that this combined political and economic control seemed to herald for the British Empire was reflected in a piece in the newly established Illustrated London News: A large family of the human race, which for centuries has been isolated from the rest, is now about to enter with them into mutual intercourse. Vast hordes of populations, breaking through the ignorance and superstition which has for ages enveloped them, will now come out into the open day, and enjoy the freedom of a more expanded civilization, and enter upon prospects immeasurably grander. (Illustrated London News, qtd. in Thurin 1) Voiced at mid-century, this statement paradoxically depicts the Chinese as both "enveloped" by backwardness, yet capable of reform and progress; as ignorant, superstitious, and characterized as pest-like "vast hordes of populations," yet also seen as equal partners with the British in a "mutual intercourse." This varied and contradictory opinion could just have easily been voiced at the end of the nineteenth century. In a way, this statement can be seen as representative of the history or, more accurately, the story of the relationship between ... ...sm. Thus even areas of resistance are encoded within the text of compliance (Rule Britannia: Women, Empire, and Victorian Writing, Ithaca: Cornell UP, 1995). Works Cited British Library Public Catalogue. 12 Oct 1999 . Campbell, Mrs. J. Weston. (Signed C. de Thierry.) "The Sons of Han [Chinese Emigration]." Macmillan's Magazine. 80 (May 1899): 58-66. Croll, Elisabeth J. Wise Daughters from Foreign Lands: European Women Writers in China. London: Pandora, 1989. Little, Alicia Bewicke. A Marriage in China. London: F. V. White & Co., 1896. Round about My Peking Garden. London: T. Fisher Unwin, 1905. "Mrs. A. Little." Obituary. The London Times. 6 Aug. 1926: 17e. Research Library Group (RLG) Union Catalog (RLIN). 12 Oct 1999 . Thurin, Susan Schoenbauer. Victorian Travelers and the Opening of China, 1842-1907. Athens, OH: Ohio UP, 1999.

Wednesday, October 23, 2019

The Representation of Tricksters in the Works of Charles W. Chesnutt

Fraud, con-man, and hustler are all modern-day terms to describe the age-old character in African American literature known as the trickster. Today's working definition of a trickster is one who swindles or plays tricks; often a mischievous figure in myth or folklore, who typically makes up for physical weakness through cunning and subversive humor. In African American literature the role of the trickster is a recurring theme, especially in the time period spanning from the post-Civil War era to the Harlem Renaissance. During slavery and the years that followed, the image of the trickster changed from a humorous amoral figure to a cunning and socially conscious icon. Charles W. Chesnutt is a primary example of an author who faithfully employs the trickster motif in many of his published works. Traditionally, the role of the trickster often presents itself when there is a powerless group who longs to transcend an oppressive social order (Jefferies, Schramm 20). In African American literature, the trickster is often depicted as someone who has the ability to manipulate situations in his/her favor, despite having little or no power. Rhonda B. Jefferies states that "the primary goal of the trickster is social nonconformity by redefining the norms of life and existence in mainstream American society (Jefferies, Schramm 20)." Since its origin in West African culture, the trickster figure has evolved from a folklore icon, mainly in the form of various animals, to an archetype whose behavior is both contradictory and complex. The trickster's recurring appearance in African American folklore, narratives, poems, novels and pop culture is no coincidence. It is the trickster's pursuit of wisdom, cunning or power in an attempt to redefine social order that makes him/her such an attractive icon. The trickster character serves as an inspirational figure for the socially oppressed and takes on many forms when expressed in past and present literature. Many African American folk tales, especially those from the southern United States, include the appearance of a trickster. In "Brer Rabbit Tricks Brer Fox Again," the trickster takes on the classic form of a clever but lazy rabbit. In this tale the rabbit becomes stuck in a well and finagles his way out by convincing the suspicious fox to help him escape. He manipulates the fox into getting into the well under false pretenses. By convincing Brer Fox that there is an abundance of fish he needs help catching and transporting out of the well, Brer Rabbit was able to leverage an escape, consequentially leaving the fox in his place. It is the rabbit's quick wit that makes him a quintessential trickster figure in many folk tales across a number of cultures. However, Brer Rabbit is just one of many depictions of a trickster rabbit in folk tales and stories throughout history. A more modern depiction of a rabbit trickster is Looney Tunes' Bugs Bunny. The ways in which Bugs utilizes his physical endurance and mastery of disguise to deceive his arch enemy Elmer Fudd is a playful interpretation when compared to those in African American literature and folklore. The integration of the trickster in modern culture, whether it be in the form of animal or man, is just one demonstration of the many ways in which this popular character transcends time and culture, to eventually become one of the most recurring archetypes in African American literature. Charles W.
Chesnutt's relationship with the trickster archetype is most evident in his collection of short stories with the characterization of Uncle Julius. Uncle Julius appeared in seven of the thirteen short stories that make up Chesnutt's The Conjure Woman. In the collection of stories, Uncle Julius often "conjures" up his tales from old folklore in an attempt to persuade or manipulate certain situations to his benefit. The description of Uncle Julius's interaction with John and Annie, the northern white couple interested in buying the grape vineyard Julius inhabits, in "The Goophered Grapevine," is a classic example of Chesnutt's employment of the trickster motif. From Uncle Julius's first appearance, the audience is under the impression that his presence is to provide theatrics and entertainment rather than fact or insight. His "performance" begins with the eating of the scuppernong grapes and ends with his fantastical account of how the vineyard came to be bewitched. John, the white northern gentleman interested in buying the vineyard, is instantly skeptical upon meeting Uncle Julius and disregards his account, stating, "At first the current of his memory - or imagination - seemed somewhat sluggish; but as his embarrassment wore off, his language flowed more freely, and the story acquired more perspective and coherence" (Chesnutt 607). The use of the word "imagination" is a clear indicator that Uncle Julius is believed to be telling fiction. John goes on to further prove his disbelief when he goes against Uncle Julius's suggestion and buys the vineyard, and later makes a considerable profit off it. John, however, does take sympathy on the man who had lived off and profited from the land, and hires him as a coachman. While Uncle Julius is one of Chesnutt's more memorable characters, he is by no means the only representation of the trickster motif in Chesnutt's works. Grandison, from "The Passing of Grandison," is another example of a trickster character, from Chesnutt's collection entitled The Wife of His Youth and Other Stories of the Color Line. In this story, Grandison is a slave from a plantation in Kentucky who successfully deceives his masters, Colonel and Dick Owens, on a number of occasions. His first act of trickery comes when he is being questioned by his old master: he assures Colonel Owens of his contentment on the plantation and his disgust with the anti-slavery ideals of northern abolitionists. Colonel Owens' intention was to select a slave his son could bring up north, one who had proven to be resistant to abolitionist ideals and the prospect of running away. To Colonel Owens' elation, Grandison's answers not only confirmed his view of the mutual benefits of slavery but went above and beyond to demonstrate a conceivably genuine appreciation of the resources and lifestyle on the plantation. He went on to question Grandison about the fairness of his treatment and the kindness of his master before promising him a bead necklace for his future wife and deeming him "abolitionist-proof." Although the interaction described was only a brief portion of the story, it proves to be a pivotal moment in the plot and leaves the audience to assume that Grandison is a loyal slave with no intention of running away.
But, as we later find out, Grandison was not at all ignorant of the ideals of abolitionism and actually aspired to be a free man. He eventually achieves his goal, as we see in the very last chapter, but not without an unexpected twist. Grandison goes on to successfully deceive his young master, Dick Owens, and feigns his loyalty several times during their travels to New York, Boston, and eventually Canada. Throughout the journey, Dick Owens provides Grandison with a number of opportunities to escape by leaving him alone on many occasions and supplying him with money that he could easily use to run away. Once Dick Owens realizes Grandison is too dense to run away, or so he thinks, he solicits the help of local abolitionists by writing an anonymous letter. However, Grandison's unwavering loyalty quickly sidetracks Owens' ploy to liberate his father's slave. Day after day Grandison continues to report to his young master every morning and night, leaving Owens to pursue more drastic measures. So, Dick Owens decides to leave Grandison alone for a couple of days, with one hundred dollars at his disposal, in a sly attempt to get Grandison to run away. Upon his return, Dick Owens finds his efforts were unsuccessful, and with much frustration and annoyance decides to make one last attempt by venturing to Canada, where slaves are free. Nevertheless, Grandison faithfully follows his master's orders and does not attempt to run away, despite the fact that there are no laws binding Grandison to Dick Owens in Canada. At this point, the young master decides to give up his efforts and solicits three men to kidnap Grandison. During this exchange Owens escapes and returns to Kentucky alone. Dick Owens concludes that Grandison is too ignorant to recognize his opportunity for freedom and goes on to marry the motive behind his attempt at nobility, Charity Lomax. Once again it is not until the final chapter that the audience learns it was Dick Owens and his father who proved to be most ignorant. In the final chapter, Grandison surprisingly returns to the plantation tattered and exhausted from his journey back to Kentucky. He recounts his story of being gagged and dragged to the "gloomy depth of a Canadian forest," where he was locked in a hut and given only bread and water. He appeases his curious spectators by ending his story with his heroic escape and return to the plantation, all the while never revealing his true motives. It is not until Grandison, along with his new wife, family and friends, disappears that his true intention to liberate them is revealed. Once thought to be a model servant, blinded by his obedience and loyal dependence, Grandison outsmarts both his masters by playing into slaveholder stereotypes and common misperceptions of the South. Grandison's successful escape with family and friends exposes him as the true trickster. Because of his convincing portrayal of an ignorant and content slave, and his willful patience, he was ultimately able to turn the tables on his masters and end up the victor in an unlikely turn of events. When comparing the presence of the trickster in "The Goophered Grapevine" and "The Passing of Grandison," there are few parallels between the two stories. The general theme of a southern black man deceiving his white superiors is apparent in both, but the similarities stop there. The two main stories are vastly different in respect to how each trickster is portrayed.
Because the audience in "The Goophered Grapevine" is warned very early on to be wary of Uncle Julius's credibility, he is at a disadvantage. However, Grandison has a very different introduction, because he first enters under the impression that he is one of the most loyal and trustworthy slaves on the plantation. On the one hand, we have Uncle Julius's characterization as a suspicious figure throughout the story from beginning to end, and on the other hand there is Grandison, who appears to be a very pious, simple-minded slave with no ulterior motives. Another difference between the two stories is that theatricality proves to be Uncle Julius's main downfall, while somehow becoming Grandison's greatest asset. The introduction and characterization of Uncle Julius and Grandison manipulate the audience's perception and ultimately determine their success in deceiving and manipulating their audience for their own personal benefit. The trickster, whether presented in a modern cartoon or in a traditional folktale, is an archetype that continues to reappear in art and literature. The classic depiction of a trickster as a rabbit in old folklore and myths, while common, is not the only form a trickster may take. Over time and across cultures, the definition of a trickster changes, but not so much so that it is unable to provide an entertaining lesson. The appeal of the trickster to African American writers is the theme of an oppressed group overcoming the challenges of social norms. Charles W. Chesnutt is a prime example of the trickster's mass appeal in African American literature. By representing the trickster as Uncle Julius and Grandison in "The Goophered Grapevine" and "The Passing of Grandison," Chesnutt adds to the long history of the trickster as an icon.

Work Cited
1. Chesnutt, Charles W. "Literature of the Reconstruction to the New Negro Renaissance, 1865-1919." The Norton Anthology of African American Literature. Ed. Henry L. Gates and Nellie Y. McKay. 2nd ed. New York, NY: W. W. Norton & Company, 2004. 604-12. Print.
2. Schramm, Susan L., and Rhonda B. Jeffries. "African American Trickster Representations in the Work of Romare Bearden." JSTOR. JSTOR, Sept. 2000. Web. 29 Nov. 2010. http://www.jstor.org/stable/3193835

Tuesday, October 22, 2019

Edward and Sarah Bishop of the Salem Witch Trials

Edward and Sarah Bishop of the Salem Witch Trials
Edward Bishop and Sarah Bishop were tavern keepers who were arrested, examined, and imprisoned as part of the Salem witch trials of 1692. At the time, Edward was about 44 years old and Sarah Wildes Bishop was about 41 years old. There were three or four Edward Bishops living in the area at that time. This Edward Bishop seems to be the one who was born on April 23, 1648. However, Sarah Bishop's year of birth is not known. Note: Bishop is sometimes spelled Bushop or Besop in the records. Edward is sometimes identified as Edward Bishop Jr. Sarah Wildes Bishop was the stepdaughter of Sarah Averill Wildes, who was named as a witch by Deliverance Hobbs and executed on July 19, 1692. Bridget Bishop is usually credited with running a tavern that was something of a town scandal, but it was more likely Sarah and Edward Bishop who ran it out of their home.
The Background of Edward and Sarah
Edward Bishop may have been the son of Edward Bishop, the husband of Bridget Bishop. Sarah and Edward Bishop were the parents of twelve children. At the time of the Salem witch trials, an older Edward Bishop also lived in Salem. He and his wife Hannah signed a petition protesting the accusations against Rebecca Nurse. This Edward Bishop seems to have been the father of the Edward Bishop married to Bridget Bishop, and thus the grandfather of the Edward Bishop married to Sarah Wildes Bishop.
Victims of the Salem Witch Trials
Edward Bishop and Sarah Bishop were arrested on April 21 of 1692 with Sarah's stepmother Sarah Wildes, William and Deliverance Hobbs, Nehemiah Abbott Jr., Mary Easty, Mary Black and Mary English. Edward and Sarah Bishop were examined on April 22 by magistrates Jonathan Corwin and John Hathorne, on the same day as Sarah Wildes, Mary Easty, Nehemiah Abbott Jr., William and Deliverance Hobbs, Mary Black, and Mary English. Among those who testified against Sarah Bishop was the Rev. John Hale of Beverly. He outlined accusations from a neighbor of the Bishops that she did entertain people in her house at unseasonable hours in the night to keep drinking and playing at shovel-board whereby discord did arise in other families and young people were in danger to be corrupted. The neighbor, Christian Trask, wife of John Trask, had attempted to reprove Sarah Bishop but received no satisfaction from her about it. Hale stated that Edward Bishop's would have been a house of great profaneness and iniquity if the behavior had not been stopped. Edward and Sarah Bishop were found to have committed witchcraft against Ann Putnam Jr., Mercy Lewis, and Abigail Williams. Elizabeth Balch, wife of Benjamin Balch Jr., and her sister, Abigail Walden, also testified against Sarah Bishop, claiming they heard Edward accuse Elizabeth of entertaining Satan at night. Edward and Sarah were jailed in Salem and then in Boston, and their property was seized. They escaped from the Boston jail for a short time.
After the Trials
After their trial, their son Samuel Bishop recovered their property. In a 1710 affidavit attempting to gain recompense for the damages they'd suffered and to clear their names, Edward Bishop said they were prisnors for thirtiey seven wekes and required to pay ten shillings pur weeake for our bord plus five pounds. The son of Sarah and Edward Bishop Jr., Edward Bishop III, married Susannah Putnam, part of the family who had leveled many of the accusations of witchcraft in 1692.
In 1975 David Greene suggested that the Edward Bishop accused - with his wife Sarah - was not related to Bridget Bishop and her husband, Edward Bishop the sawyer, but was the son of another Edward Bishop in town.

Monday, October 21, 2019

Dialectical Behaviour Therapy

Dialectical Behaviour Therapy
Dialectical Behaviour Therapy (DBT) is a multimodal treatment model developed by Marsha Linehan, an American clinical psychologist, in the early 1980s (Palmer, 2002; Neacsiu, Ward-Ciesielski, Linehan, 2012; Linehan, 1993a). DBT is based on the principles and strategies of an earlier treatment known as cognitive-behavioural therapy (CBT) and was originally intended for individuals presenting with suicidal symptoms (Linehan, 1993b). DBT replaced CBT in the treatment of suicidal individuals because the latter's strategies had failed, since most clients responded to treatment by withdrawing or attacking the therapist. Moreover, counselors experienced a variety of difficulties with the use of CBT in treating suicidal clients, especially the competing priorities concerning the need to decrease suicidality, teach the clients new behavioural skills, and manage distress among the clients (Neacsiu et al., 2012; Linehan, 1993b). DBT is designed to provide a comprehensive approach to treating suicidal clients, such as people who are suffering from borderline personality disorder (BPD). The treatment is based on theoretical premises and the principles of validation, acceptance, and mindfulness (Feigenbaum, 2007; Swales, 2009; Wix, 2003). Apart from borderline personality disorder (BPD), DBT has also been used in the treatment of anxiety, posttraumatic stress disorder (PTSD), self-destructive behaviors, impulsive behaviors, and alcohol/drug problems (Feigenbaum, 2007). This paper summarizes different aspects of DBT: the major concepts and premises, means of assessment, goal setting, processes, and applicability.
The Philosophical Elements of DBT
There are three major philosophical elements that underlie DBT, including dialectics, Zen Buddhism, and behavioural science (Palmer, 2002; Neacsiu et al., 2012). The basic element of DBT in relation to behavioural science is cognitive-behavioural therapy. DBT is based on the therapeutic techniques of CBT, which emphasize self-monitoring and collaborative efforts from both the client and the therapist (Palmer, 2002). However, it is important to note that some aspects of CBT are not effective in addressing the problems present among suicidal clients. In order to address the challenges associated with behavioural-science approaches such as CBT in the treatment of individuals with BPD, the DBT approach incorporates the principles of Zen. Here, the Zen philosophy encourages people to forego their ideas regarding their view of reality and focus on the current moment through acceptance, self-validation, and tolerance. From the Zen perspective, the world is perfect as it is and each moment of reality should be accepted without judgment (Robins et al., 2004). This principle is incorporated into the mindfulness component of DBT, which emphasizes the need for patients to intentionally live in the present moment without judging, rejecting, or getting attached to the moment (Linehan, 1993b). On the other hand, counselors and their BPD clients will recognize that putting acceptance and change together in any treatment session is very difficult. Therefore, DBT incorporates the dialectical element, which emphasizes the importance of synthesizing opposites.
Here, dialectics is a very important component of DBT, especially in dealing with multi-problem clients, because it allows them to initiate behavioural change by means of acceptance and self-validation (Neacsiu et al., 2012). The dialectical philosophy encourages clients to take a holistic look at different perspectives, encapsulate opposites, and move on continuously.
Dialectical Behaviour Therapy Assumptions
Alongside the philosophical elements of DBT identified in the foregoing discussion, the treatment involves a number of assumptions about the client and the counselor. On the part of the client, DBT assumes that the clients are capable of doing their best to solve their problems. Furthermore, the therapist assumes that the clients cannot fail in the DBT treatment. Additionally, there is the assumption that the clients may not be the source of all the problems they are facing, but they must be willing to solve them either way. Most importantly, the therapy assumes that the lives of people suffering from BPD are unbearable, and therefore the clients must be motivated to change their lives for the better (Neacsiu et al., 2012; Kliem, Kroger, Kosfelder, 2010). Regarding the therapist and the therapy itself, it is presumed that the counselor has a responsibility to care for and help the clients to change their ways and achieve their goals in life. Moreover, the therapist must conduct DBT with utmost clarity, precision, and compassion while ensuring that the therapeutic relationship with the clients exemplifies a real interaction between equals. Further, it is presumed that the principles of behaviour apply to the therapist in the same way they apply to clients. On the other hand, the therapist needs support in caring for and helping the clients suffering from BPD. Other overarching assumptions imply that the therapist can fail in delivering DBT and that the treatment can fail even without the therapist's fault (Linehan, 1993a; Miller, Rathus, Linehan, 2007).
Theoretical Foundations of DBT
The biosocial theory of borderline personality disorder informs dialectical behaviour therapy. The theory hypothesizes that BPD arises when an emotionally vulnerable person interacts with an invalidating environment (Linehan, 1993a; Swales, 2009; Palmer, 2002). Emotional vulnerability refers to instances where an individual reacts excessively to less harmful stressors and takes a long time to regain baseline emotions after the stressor is removed (Swales, 2009). When a growing child is exposed to an invalidating environment, it is probable that s/he will not have the opportunity to understand his/her personal feelings and experiences. Besides, this child may not be able to relate his/her responses to particular events in the real world, as he/she lacks the support of others in solving or coping with difficult/stressful situations, most especially when no one acknowledges the problems associated with such situations (Huss & Baer, 2007). Consequently, the child turns to the immediate environment for answers regarding his/her problems, but the environment may limit the demands that she makes on the significant others. In the long run, the child may begin hoping to gain acceptance from others as well as develop extreme emotions to make others acknowledge her experiences.
An obvious consequence of this erratic emotional response is that it prevents the child from understanding and controlling specific emotions. Furthermore, the child may not achieve emotion modulation because the situation does not allow her to acquire the necessary skills (Crowell et al., 2009). Overall, the child slips into a state of emotional dysregulation which, upon interacting with the invalidating environment, leads to Borderline Personality Disorder (BPD). Therefore, based on this theory, DBT is designed to help borderline patients to recover emotional control under validating environments.
Means of Assessment/BPD Diagnosis
Emotional dysregulation in a growing child can interact with the invalidating environment to cause BPD. As a result, emotional dysregulation is a central feature in the development of BPD (Neacsiu et al., 2012; Linehan, 1993a; Linehan, Bohus, Lynch, 2007). From a DBT perspective, emotions are considered complex involuntary responses to both the internal and external environments. In DBT, such emotional responses are categorized into various subsystems, which are important to treatment because they are the main targets of the different emotion regulation processes of DBT. These subsystems include emotional vulnerability to cues, triggers, response tendencies, nonverbal and verbal expressive actions/responses, and secondary emotions, which interact in different ways to produce a particular pattern of emotions. Therefore, one can assess emotion dysregulation by examining the way an individual alters or controls the various subsystems of responses under average or normal conditions. Emotion dysregulation is characterized by excessive negative affect, irrational thoughts, impulsive emotional behaviours, dissociation under stress, and difficulties in controlling physiological arousal and in undertaking non-mood-dependent activities (Linehan et al., 2007). Various self-destructive behaviours, dysfunctional behaviours, and maladaptive responses are important indicators of emotion dysregulation.
Goal Setting in DBT
In DBT, goals for treatment are set through collaborative efforts involving both the therapist and the client. However, the therapist plays a major role in modifying the clients' goals to fit into the treatment model. Here, a treatment target hierarchy is used in determining the problem focus in each treatment session. In most cases, the therapist gives the highest priority to addressing suicidal and self-injurious behaviours among the clients. These behaviours are commonly referred to as out-of-control behaviours and are usually categorized into life-interfering, life-threatening, and severe quality-of-life-interfering behaviours. Once a client has achieved stability and control over these dysfunctional behaviours, the therapy then focuses on activities aimed at stabilizing the clients' emotions and alleviating trauma. Further, the therapy moves to activities that increase the clients' self-respect and decrease problems that can interfere with the clients' ability to achieve personal goals. Finally, the therapy proceeds to resolving the clients' experiences and feelings of incompleteness (Koerner & Linehan, 1992; Kliem et al., 2010). Moreover, it is imperative for the therapy to address various secondary targets, such as inhibited behavioural patterns, which may interfere with the progress made in treatment.
Processes and Specific Techniques in DBT The main objectives of DBT are to enhance the clients’ capabilities and motivate them accordingly achieved by increasing the clients’ set of skilled behaviors, generalizing the learned behaviours to the natural environment, improving the clients’ motivation, reducing dysfunctional behaviours and eliminating their reinforcements, reinforcing effective behaviors and increasing the therapists’ motivation and capabilities. These functions of DBT are further supported by four major modes of treatment, which include individual psychotherapy, skills training group therapy, consultation teams, and phone coaching. On the other hand, DBT focuses on problem-solving by means of validation and dialectical strategies. Therefore, the treatment involves specific techniques such as change strategies, validation strategies, dialectical strategies, communication strategies, and case management strategies (Neacsiu et al., 2012). Moreover, these broad categories involve more specific processes and techniques of handling borderline clients. There are many treatment techniques borrowed from CBT, such as targeting, attention t o in-session behaviours, chain analysis, opposite action, and observing limits (Linehan, 1993a; Dimeff Linehan, 2001). Applicability DBT is the most widely investigated psychosocial model of treatment for BPD. A myriad of studies have tested the efficacy of DBT in resolving different symptoms associated with BPD in a variety of populations and settings. In studies involving highly suicidal patients, the researchers found out that DBT was effective in resolving a number of dysfunctional and maladaptive behaviours such as suicide including some cases of severe suicidal behaviors and self-jury among the participants as compared to treatment as usual (TAU). Furthermore, it has been shown in these studies that the tendency to use psychiatric facilities was less among BPD clients who had been treated using the DBT approach (Neacsiu et al., 2012; Bedics, Atkins, Comtois, Linehan, 2011; Pistorello et al., 2012; Harned et al., 2009; Iverson, Shenk, Fruzzetti, 2009). On the other hand, the effectiveness of DBT has been investigated in the treatment of other psychosocial issues, and there are promising results, particularly in treating anxiety, substance abuse, eating disorders, ADHD, and treatment-resistant depression (Rizvi, Steffel, Carson-Wong, 2012). References Bedics, J. D., Atkins, D. C., Comtois, K. A., Linehan, M. M., (2012). Treatment differences in the therapeutic relationship and introject during a 2-year randomized controlled trial of dialectical behaviour therapy versus nonbehavioural psychotherapy experts for borderline personality disorder. Journal of Consulting and Clinical Psychology, 80(1), 66-77. Crowell, S. E., Beauchaine, T. P., Linehan, M. M. (2009). A biosocial developmental model of borderline personality: Elaborating and extending Linehan’s theory. Psychological Bulletin, 135, 495-510. Dimeff Linehan, M. M. (2001). Dialectical behaviour therapy in a nutshell. California Psychologist, 34, 10-13. Feigenbaum, J. (2207). Dialectical be haviour therapy: An increasing evidence base. Journal of Mental Health, 16(1), 51-60. Harned, M. S., Chapman, A. L., Dexter-Mazza, E. T., Murray, A., Comtois, K. A., Linehan, M. M. (2009). Treating co-occurring Axis I disorders in recurrently suicidal women with borderline personality disorder: A 2-year randomized trial of dialectical behaviour therapy versus community treatment by experts. 
Personality Disorders: Theory, Research, and Treatment, 5(1), 35-45. Huss, D. B., Baer, R. A. (2007). Acceptance and change: The integration of mindfulness-based cognitive therapy into ongoing dialectical behaviour therapy in a case of borderline personality disorder with depression. Clinical Case Studies, 6(1), 17-33. Iverson, K. M., Shenk, C., Fruzzetti, A. E. (2009). Dialectical behaviour therapy for women victims of domestic abuse: A pilot study. Professional Psychology: Research and Practice, 40(3), 242-248. Kliem, S., Kroger, C., Kosfelder, J. (2010). Dialectical behaviour therapy for b orderline personality disorder: A meta-analysis using mixed effects modeling. Journal of Consulting and Clinical Psychology, 78(6), 936-951. Koerner, K., Linehan, M. M. (1992). Integrative therapy for borderline personality disorder: Dialectical behaviour therapy. In J. C. Norcross (Ed.). Handbook of Psychotherapy integration (pp. 433-459). New York, NY: Basic Books. Linehan, M. M. (1993a). Cognitive-behavioural treatment of borderline personality disorder. New York, NY: Guilford. Linehan, M. M. (1993b). Skills training manual for treating borderline personality disorder. New York, NY: Guilford. Linehan, M. M., Bohus, M., Lynch, T. (2007). Dialectical behaviour therapy for pervasive emotion dysregulation: Theoretical and practical underpinnings. In J. Gross (Ed.), Handbook of emotion regulation (pp. 581-605). New York, NY: Guilford. Miller, A. L., Rathus, J. H., Linehan, M. M. (2007). Dialectical behaviour therapy with suicidal adolescents. New York, NY: Guilford. Neacsiu, A. D., Ward-Ciesielski, E. F., Linehan, M. M. (2012). Emerging approaches to counseling intervention: Dialectical behaviour therapy. The Counseling Psychologist, 40(7), 1003-1032. Palmer, R. L. (2002). Dialectical behaviour therapy for borderline personality disorder. Advances in Psychiatric Treatment, 8(1), 10-16. Piatorello, J. Fruzzetti, A. E., MacLane, C., Gallop, R., Iverson, K. M. (2012). Dialectical Behaviour therapy (DBT) applied to college students: A randomized clinical trial. Journal of Consulting and Clinical Psychology. Advance Online Publication. Rizvi, S. L., Steffel, L. M., Carson-Wong, A. (2012). An overview of dialectical behavioral therapy for professional psychologists. Professional Psychology: Research and Practice. Advance Online Publication. Robins, C. J., Schmidt, Linehan, M. M. (2004). Dialectical behaviour therapy: Synthesizing radical acceptance with skillful means. In S. C. Hayes, V. M. Follette, M. M. Linehan (Eds.), Mindfulness and acceptance: Expanding t he cognitive-behavioural tradition (pp. 30-44). New York, NY: Guilford. Swales, M. A. (2009). Dialectical behaviour therapy: Description, research and future directions. The International Journal of Behavioral Consultation and Therapy, 5(2), 164-177. Wix, S. (2003). Dialectical behaviour therapy observed. The British Journal of Forensic Practice, 5(2), 3-7.