
PSYCHOLOGY: UNIT 3

Biorhythms, Aggression, Relationships

By Christopher Martin

Topic I: Biorhythms (separated into six essay topics)

Biological rhythms: There are three types of biological rhythm: circadian, ultradian and infradian. Circadian rhythms are those which complete a cycle in a 24-hour period, such as the sleep-wake cycle, heart rate or metabolism. People vary within their circadian rhythms, most notably in the owl/lark division, which describes people whose biological clocks run ahead of or behind the average. Ultradian rhythms are those which occur more than once in a 24-hour period, for example eating, or the sleep cycle, which repeats approximately four times per night. Infradian rhythms are those which take longer than 24 hours to complete, such as the menstrual cycle and PMS. These, along with SAD (seasonal affective disorder), are the infradian rhythms which have attracted the most psychological research.

The stages of sleep are an example of an ultradian rhythm, repeating roughly every 90 minutes during sleep. Since the invention of the EEG in the 1930s, research into the stages of sleep has increased drastically. In 1968, Rechtschaffen and Kales standardised the classification of four distinct stages that people pass through during sleep. Stage 1 usually lasts for roughly 15 minutes at the beginning of the cycle and is characterised by slower theta brain waves. Stage 2, lasting about 20 minutes, is characterised by sleep spindles (short bursts of rapid brain activity) and K-complexes. Following this is stage 3, which lasts for 15 minutes; here the brain waves slow further and increase in amplitude and wavelength, developing into delta waves. Stage 4 is similar to stage 3 and is when a person is most relaxed and most difficult to wake. The fifth stage of sleep is called REM (as opposed to stages 1-4, which are NREM stages). During REM sleep the brain is almost as active as it is during the day. Sleep paralysis also occurs: because the pons disconnects the brain from the muscles, the body is effectively paralysed even though brain activity is high.
A few infradian rhythms have attracted a fair amount of research. Seasonal affective disorder affects a small number of people. It is a disorder in which low light levels stimulate melatonin production (a neurochemical which induces sleepiness) and decrease serotonin production (which can lead to depression). Terman et al. (1998) studied 124 participants with SAD; 85 were exposed to a bright light in the morning or evening, while the others were exposed to negative ions and acted as a placebo group. 60% of the morning-light group showed an improvement, as opposed to 30% of the evening-light group and only 5% of the placebo group. It was therefore concluded that bright light acts as an exogenous zeitgeber, resetting the biological clock in the morning.

The menstrual cycle, lasting roughly a month, involves both endogenous pacemakers and exogenous zeitgebers. Internally it is controlled by levels of oestrogen and progesterone, both secreted by the ovaries; these cause the release of an egg from the ovary and the thickening of the uterus lining. One notable external factor is living with other women, which alters the cycle, most likely through the secretion of pheromones (chemicals which carry messages between individuals of the same species). This was investigated by McClintock and Stern (1998) in a 10-year longitudinal study, in which sweat samples from 9 women were collected and dabbed on the upper lip of a separate group of 20 women with histories of irregular menstrual cycles. 68% of recipients responded to the pheromones. This is supported by Russell et al. (1980), who conducted a similar study in which four fifths of participants responded to the pheromones. However, McClintock's study has been criticised for its small sample size, and Wilson (1992) believes that the results were due to statistical errors; when these are corrected, the effect disappears. Reinberg (1967) reported on a woman who lived in a cave for 3 months; her menstrual cycle shortened to 25.7 days.
It may therefore be possible for light levels to affect the cycle. This is supported by Timonen et al. (1964), who found women were less likely to conceive during darker months due to the effect of light on the pituitary gland, which may confer an evolutionary advantage.

Endogenous pacemakers are internal factors which are able to regulate biological rhythms. To study these, Siffre (1962) spent 61 days in a cave in low light conditions. During this period his body clock extended to a 24.5-hour day, and when he emerged he believed it was 28 days earlier than it in fact was. This suggests that there is internal control of circadian rhythms, because a regular cycle was maintained, but also that there must be exogenous zeitgebers that shorten the cycle to 24 hours rather than 24.5. There is conflicting evidence from Czeisler, who conducted a study in which participants were kept in constant low light conditions; a roughly 24-hour cycle was maintained, which is known as free running. A major criticism of Siffre's study is that it was a case study, and because of individual differences other people may react differently. The basis for endogenous pacemakers is the pineal gland, which secretes melatonin (the neurochemical that induces sleepiness). In some animals the pineal gland itself has photoreceptors that monitor light levels; in humans, however, the SCN (suprachiasmatic nucleus) receives sensory input via the optic nerve and regulates melatonin production. Morgan (1995) transplanted the SCNs of mutant hamsters (with abnormal circadian rhythms) into regular hamsters and found that the recipients developed abnormal circadian rhythms, implicating the SCN in the regulation of circadian rhythms. However, there are issues with generalising these results across species, as well as ethical concerns about the procedure and about releasing the hamsters into the wild. *****

Disrupting rhythms: Under normal circumstances biological rhythms do not conflict with people's daily lives; however, there are two main cases in which conflict can occur: jet lag and shift lag. Jet lag (desynchronosis) is caused by the body's internal clock being out of sync with external cues, and its symptoms include fatigue, insomnia, anxiety and dehydration. Schwartz et al. (1995) studied the performance and reaction times of baseball teams flying between the East and West coasts of America (a 3-hour time difference). It was found that teams travelling west performed considerably better than those travelling east. However, this study was performed on sports teams, who are likely to be trained to have fast reaction times, so there are difficulties in generalising its results. De la Iglesia (2004) exposed rats to artificial days and nights lasting 11 hours rather than 12. It was found that the rats gradually began to exhibit daytime behaviour at night. De la Iglesia went on to discover that both the top and bottom of the SCN contained the protein Per1 during the day and the protein Bmal1 at night; during desynchronosis, however, the top half contained Per1 while the bottom half contained Bmal1, suggesting that the bottom half of the SCN continues to rely on endogenous pacemakers whereas the top half is affected by exogenous zeitgebers. Saper (2008) suggested that there is a food clock capable of over-riding the master biological clock, and that fasting during flights and eating at the correct times in the new time zone may therefore reset the biological clock. Shift lag is a serious problem, as it can result in fatigue, sleep disturbance, lack of concentration and memory loss, and usually occurs over extended periods of time. A number of shift patterns are currently in use: fixed shifts and clockwise/anti-clockwise rotating shifts.
Czeisler et al. (1982) recommended a slow rotation with a phase-delay system (each shift change moving later in the day) to workers in a Utah chemical plant, and a number of benefits were reported, including increased morale and health. Boivin et al. (1996) put 31 male participants on an inverted sleep pattern for three days. After waking each day they were subjected to one of four conditions: very bright light, bright light, ordinary room lighting or dim lighting. To measure adjustment, core body temperature (a known circadian rhythm) was recorded. Participants in the very bright light condition adjusted by five hours within three days. The other conditions also produced some advance, but not as much, and participants in the dim lighting condition did not adjust at all. It was concluded that bright light can help biological rhythms adjust to shift lag, and this research could be applied in companies where employees work in shifts. Alternatively, Sharkey (2001) found that the hormone melatonin could be used to aid adjustment to shift patterns and increase sleep during periods of non-work; however, this treatment is currently only available in America, as it has not yet been given an EU licence.

*****

Sleep states: Sleep appears to be necessary for all animals to survive. It has been estimated that humans sleep on average for 7.5 hours per night. Of course, there are individual differences, and Meddis (1979) even reported the case study of a woman who slept for 1 hour per night with none of the side effects associated with sleep deprivation. The stages of sleep (stages 1-4 and REM) are covered under biological rhythms. There are two main theories of why sleep developed: the evolutionary theory and the restorative theory. The evolutionary theory (Meddis) states that, due to humans' poor vision in low light, sleep has an evolutionary advantage because it keeps the species safe at night, and therefore more likely to survive to pass on genes.
This theory also takes into account that animals with higher metabolic rates spend more time eating and so sleep in shorter periods at a time. Evans (1984) criticises this theory, stating that the behaviour patterns involved in sleep are "glaringly at odds with common sense": while the theory proposes that animals sleep for protection, they are in fact at their most vulnerable during this state. Siegel (2005) reviewed the sleep patterns of numerous species and concluded that, because of the diversity of sleep patterns, sleep cannot be performing the same function in every species. Webb proposes a variation on the evolutionary theory, known as the hibernation theory, which sees sleep as an adaptive behaviour designed to conserve energy. This theory compares sleep to hibernation in that it occurs to conserve energy so that an animal does not need to feed constantly. In humans, sleep lowers the metabolic rate by up to 10%, thus conserving energy and resources during times when early humans were unable to forage or hunt (e.g. night time, as described in the previous paragraph). Meddis criticises this theory for being too simplistic and for not taking into account the role of sleep in protection from danger. Empson (1993) described sleep as a complex function involving far-reaching changes in brain and body physiology, suggesting that sleep must have a restorative function and cannot be purely evolutionary. Evolutionary theories are also unable to explain the complexities of sleep such as the REM stage (although it has been suggested that the brain activity observed in this stage serves to prevent brain temperature from dropping too low). Some psychologists argue that sleep would now be pointless in human societies; the evolutionary response is that behaviour changes much more rapidly than biology or physiology, a phenomenon known as genome lag.
Oswald (1966) suggested that sleep restores energy, removes waste from muscles and repairs cells, as well as allowing growth to occur. One example is the build-up of waste from neurotransmitters used in the nervous system throughout the day; the restoration theory states that this can be removed, and neurotransmitter levels restored, during sleep. Oswald noticed that more growth hormone is released into the bloodstream during stages 3 and 4 of NREM sleep, supporting this theory. However, many of the other processes that occur during sleep, such as protein synthesis, also happen while awake. Further support is provided by Shapiro (1981), who studied ultra-marathon runners. It was found that on the two nights after the marathon, participants slept for 90 minutes longer than usual, while REM sleep decreased and stage 4 slow wave sleep increased from 25% to 45% of total sleep. An explanation for why deep sleep occurs in the first half of the night is that amino acids only remain in the bloodstream for eight hours; protein synthesis could therefore only take place during the first part of the night. Hartmann (1984) extended this theory to include restoration during REM sleep, although of the brain rather than the body. Stern and Morgane (1974) found that levels of neurotransmitters within the brain may be restored during REM sleep. Further support comes from the fact that more time is spent in REM sleep during childhood (when the most brain development occurs) than in adulthood. Most research into the purpose of REM sleep comes from sleep deprivation studies, which are discussed in the next section. However, other supporting evidence comes from people with brain injuries. It has been found that people with brain damage caused by ECT, strokes, etc. spend longer in REM sleep for roughly 6 weeks afterwards, suggesting that restoration and repair of the brain is being carried out during this time. Also, patients on MAOI antidepressants spend less time in REM sleep, but when the treatment is stopped there is no REM rebound. It has therefore been suggested that the antidepressants (which are known to increase serotonin and dopamine levels) provide what REM sleep would otherwise provide.
Evidence against this theory includes Horne and Minard (1985), who found that after physical exertion people fall asleep more quickly but do not sleep for longer. Also, Ryback and Lewis discovered that the amount of sleep required does not decrease when daytime activity decreases. Finally, it is worth noting that Horne (1988) distinguishes between core sleep, comprising stage 4 slow wave sleep and REM sleep, and non-core sleep, comprising stages 2 and 3 slow wave sleep. Core sleep is present in all animals but non-core sleep is not, suggesting the latter is not essential.

*****

Sleep deprivation: Peter Tripp is an example of a total sleep deprivation case study, having spent 201 hours and 10 minutes awake. Whenever he began to fall asleep he was woken by doctors and nurses; after a while he began to hallucinate and suffer from delusions. A second case study in this area is that of Randy Gardner (1965), who stayed awake for 11 consecutive days. During this time he reported blurred vision and mild paranoia. When he did sleep, he slept for 14 hours and 40 minutes on the first night and above-average amounts on the following nights, but in total he only reclaimed roughly 11 of the hours of sleep he had lost. He did, however, spend much more time than usual in stage 4 NREM and REM sleep, suggesting that these stages are important. Being case studies, these results are difficult to generalise to the entire population, but other research has backed up the findings. In severe cases of fatal familial insomnia, a person, upon reaching middle age, stops sleeping until the point of death. Huber-Weidman carried out a meta-analysis of sleep deprivation studies and found that symptoms ranged from discomfort after 1 night without sleep to a loss of self-identity after 6 nights without sleep. Another study, by Webb and Bonnett (1978), reduced participants' sleep to four hours per night and found no negative consequences.
However, all sleep deprivation studies are subject to demand characteristics, and to experimenter bias arising from expectations about the participants. Partial sleep deprivation studies deprive participants of only one part of a night's sleep: either NREM or REM. Dement (1960) deprived participants of either REM or NREM sleep and observed the consequences. He found that participants deprived of REM sleep found it difficult to concentrate, became more aggressive and displayed REM rebound; by the 7th night these participants were attempting to enter REM sleep up to 26 times per night, so the procedure deteriorated into total sleep deprivation. Another study is provided by Jouvet (1967), who placed cats on upturned flower pots surrounded by water. This meant they could sleep, but once REM sleep was reached and sleep paralysis occurred they would fall into the water. Many of the cats became classically conditioned and would wake before entering REM sleep. On average, the cats survived for 35 days. Dwyer and Charles concluded that evolutionary theories do not explain why sleep deprivation causes so many adverse effects, whereas restorative theories do; that said, evolutionary explanations have important advantages, so an eclectic approach may be the most suitable. Sleep deprivation studies also highlight three stages in the physiology of sleep. The first is staying awake, which is mainly controlled by the RAS (reticular activating system). Support for this is provided by Bremer (1937), who discovered that lesioning the brain stem above the RAS in cats caused a permanent coma, whereas lesioning the brain stem below the RAS caused no disturbance; this is because lesioning above the RAS prevents electrical impulses from passing from it to the higher brain centres. The second stage is getting to sleep, which requires the RAS to be switched off, a process completed by the neurochemicals melatonin and serotonin. The mechanism is as follows: when a lack of light is detected by the eyes, an electrical impulse is passed to the SCN, which influences the pineal gland and stimulates the production of melatonin. Melatonin in turn causes serotonin to be produced in the raphe nuclei, and the serotonin is then able to switch off the RAS. This is supported by Jouvet (1967), who discovered that damaging the raphe nuclei of cats caused severe insomnia. The third stage is switching from NREM to REM sleep, which is triggered by two processes: first, the locus coeruleus produces noradrenaline, which passes to the higher brain centres; second, the neurotransmitter acetylcholine enters the pons and activates REM sleep, which lasts roughly 15 minutes, with the whole cycle repeating every 90 minutes.

*****

Lifespan changes: A brief overview of the pattern of sleep at different ages is as follows. Newborns sleep for around 18 hours per day, 9 hours of which is REM sleep. In the first few months, babies can enter REM sleep directly; only after this period is the REM/NREM cycle established. At 1 year the total time asleep drops to 14 hours per day and the ultradian sleep cycle lengthens to 60 minutes. Between 5 and 10 years, total sleeping time drops to 10 hours, with 75% NREM and 25% REM sleep, and the ultradian cycle lengthens to 70 minutes. The period between 10 and 12 years old is described by Dement (1999) as a "sleep/wake utopia". During adolescence the same amount of sleep should be maintained, but social and environmental factors often prevent this. Between the ages of 18 and 30 most people's sleep decreases and many people become sleep deprived. Between 30 and 45 sleep continues to decrease and people tend to feel tired upon waking; the amount of deep sleep (particularly stage 4 sleep) decreases. From the age of 45 to 60, hormone production decreases and the quality of sleep declines.
Total sleep time drops to 7 hours and there is little or no stage 4 sleep; however, REM sleep remains constant at 2 hours per night. From the age of 60 onwards the quality of sleep deteriorates rapidly, and Dement estimates that there could be up to 1,000 micro-arousals during every night's sleep, which have a profound effect on the restorative function of sleep. Van Cauter et al. (2000) carried out a longitudinal sleep study on 149 male participants over a 14-year period. They found that hormone production decreases between the ages of 16 and 35, and again between the ages of 35 and 50, meaning that, from a restorative perspective, the amount of growth and physiological repair carried out during sleep decreases over these periods. From an evolutionary perspective, early humans were unlikely to live past 45, so the gradual decrease in hormones was natural: after this point, restoration was no longer needed. This is supported by other studies which suggest that as age increases there is a decrease in total sleep time and notably in slow wave sleep, along with an increase in sleep latency. Kloesch et al. (2006) found that the male sleep pattern is disrupted by sleeping with a partner. The study comprised 8 unmarried, childless couples sleeping together for 10 days and apart for 10 days, completing a series of tasks each day. It found that co-sleeping raises the levels of stress hormones in men, whereas women spend more time in deep sleep, so it is beneficial for them. The methods used for testing sleep pattern differences at different ages are rigorous and include EEG, EMG, etc. However, most information is gathered in sleep laboratories, a highly artificial environment which may in itself affect sleep patterns (low ecological validity). Self-reporting is also widely used in sleep studies, which again is unreliable and can often produce socially desirable responses.
There is also dispute over whether older people do in fact sleep much less than younger people. Borbély et al. (1981) reported that 60% of over-65s take regular afternoon naps, although there is a wide consensus that older people do have less nocturnal sleep. While there are many average differences such as this one, individual differences also play a large role in sleep at different ages. There are also cultural differences: most research has been into monophasic sleep, yet in some cultures afternoon naps or two shorter periods of sleep are more common, making this an example of ethnocentric research.

*****

Disorders of sleep: There are a number of sleep disorders, and most people will suffer from at least one of them at some point during their life. The most common are primary and secondary insomnia, somnambulism and narcolepsy. Primary insomnia is defined as an inability to fall or remain asleep due to anything other than a disease process (i.e. not psychiatric or environmental causes). Vgontzas et al. (2005) found that insomniacs have increased levels of ACTH and cortisol, which have been associated with higher levels of arousal; this suggests that some insomniacs may be in a state of hyper-arousal. Nofzinger et al. (2004) found that in the transition to sleep, brain activity in the thalamus and prefrontal cortex usually decreases, but this change is smaller in insomniacs, which may help explain why insomniacs experience difficulty falling asleep. Winkelman et al. (2008) propose that insomnia is caused by changes in brain chemistry: this study found that people who had suffered from insomnia for more than 6 months had reduced levels of GABA, which inhibits brain function, offering an alternative explanation for insomniacs' inability to fall or remain asleep. A third explanation is genetic: Beaulieu-Bonneau et al. (2007) found that 34.9% of insomniacs surveyed had a first-degree relative with insomnia. Further, Watson et al. (2006) found a correlation of 0.47 between MZ twins and 0.15 between DZ twins, suggesting that while genetics is not fully predictive of insomnia, it is an influencing factor. Studies have also implicated personality factors in the onset of primary insomnia. One such study, by Kales et al. (1976), used the MMPI to test 128 insomniacs and found that insomniacs tended to have an internal locus of control. This original study had a biased sample and no control group, so it was repeated with a sample of 300 insomniacs and a control group of 100. This Kales et al. (1983) study found similar results, and that insomniacs had the following traits in common: obsessiveness, inhibition of anger and a negative self-image. Kales concluded that these traits keep insomniacs in a constant state of emotional arousal, contributing to their trouble sleeping. Secondary insomnia is a form of insomnia caused or worsened by psychiatric or environmental factors. A range of medical conditions have been shown to produce insomnia as a side effect, including chronic pain, respiratory diseases and endocrine conditions. Katz et al. (2002) conducted a questionnaire study of 3,445 patients with a range of these conditions and found that 50% reported symptoms indicative of insomnia. Bardage and Isacson (2000) found that 20% of patients using drugs to treat hypertension had insomnia-like symptoms. Even sleeping drugs can cause insomnia, because they can cause dependence and therefore rebound insomnia when they are withdrawn, leading to increased dependence. Other drugs that may amplify the effects of insomnia include alcohol and tobacco. Aside from biological causes, secondary insomnia is also associated with mental health problems. Weiss et al.
(1962) found that 72% of psychiatric patients reported sleep disturbance, as opposed to 18% of a control sample. In fact, insomnia is so closely associated with depression that it is a criterion for its diagnosis. Benca and Peterson (2008) suggest that patients with depression may have abnormalities in the genes associated with circadian rhythms similar to those of insomniacs. The HPA axis, which regulates cortisol production, is linked to depression; cortisol normally reaches its lowest levels during the first few hours of sleep but remains elevated in patients with depression, furthering the potential link. Brain injury is also a common cause of secondary insomnia. Cohen et al. (1992) compared the sleep complaints of 22 hospitalised patients with those of 77 discharged patients, finding rates of 72.7% and 51.9% respectively, both much higher than in the general population. In a study by Ayalon et al. (2007), between 40% and 60% of patients with brain injury complained of insomnia. However, of this sample of 42, it was found that 15 had CRSD (circadian rhythm sleep disorder), which is commonly misdiagnosed as insomnia. Unlike insomniacs, people suffering from CRSD are able to get enough sleep if allowed to; the problem is simply one of the timing of circadian rhythms. Narcolepsy is a sleep disorder characterised by excessive daytime sleepiness. In addition, patients may suffer from cataplexy, disturbed sleep, hypnagogic hallucinations and sleep paralysis. Narcolepsy usually begins in the late teens or early 20s, but around 25% of sufferers do not experience onset until the age of 40. There is little evidence that narcolepsy is generally caused by brain damage, although Scammell et al. (2001) report the case of a 23-year-old who acquired narcolepsy due to damage to the hypothalamus after a stroke. Further testing revealed reduced levels of Hcrt (hypocretin), which features in a popular theory of narcolepsy: it is thought that the disorder may be caused by a loss of the cells that secrete Hcrt.
Supporting this, Gerashchenko et al. (2003) found a correlation between the number of cells lost and the decline in the level of Hcrt. Further support for the Hcrt explanation comes from Parkinson's disease (a disease which destroys brain cells): many people suffering from it complain of symptoms similar to those of narcolepsy, and Thannickal et al. (2007), conducting post-mortems on the brains of Parkinson's sufferers, found that up to 62% of their Hcrt-producing cells had been lost. An alternative explanation for narcolepsy is a genetic one. Nishino and Mignot (1997) found that narcoleptic Dobermans have a genetic mutation affecting Hcrt; however, research showed that the defect did not apply to humans, and it seems unlikely that genetics alone explain the onset of narcolepsy in humans. One further explanation, provided by Overeem et al. (2008), is that because over 90% of narcoleptics carry particular subtypes of a human leukocyte antigen (HLA), the Hcrt-producing cells may be being destroyed as part of an autoimmune response. However, there is not yet any direct evidence for this hypothesis.

_______________________________________________________________________________________________

Topic II: Aggression (separated into three essay topics)

Social psychological explanations: One explanation of aggression is deindividuation: the process by which a person loses self-awareness. Le Bon (1895) suggested that a crowd, when combined with anonymity, suggestibility and contagion, acts as one mind. Deindividuation is characterised by lowered self-evaluation and lowered concern about others' views of the self; this means that both the person and the group are uninhibited by personal morals. Zimbardo suggested that uniforms increase the effects of deindividuation because of the anonymity they provide. This is supported by his Stanford prison study, which was stopped after just 6 days because of the level of abuse the guards subjected the prisoners to. Another Zimbardo study (1969) observed 4 groups of female students as they were asked to administer electric shocks to other students in a learning exercise. There were two conditions: in the deindividuated condition, participants wore lab coats and hoods and were never referred to by name; in the individuated condition, participants wore ordinary clothes and were introduced to each other beforehand. It was found that the deindividuated group shocked for twice as long. Further support is provided by Watson (1973), who studied warriors from 24 cultures and found that the most aggressive were those who painted their faces. However, not all large anonymous crowds perform aggressive acts, and Postmes and Spears (1998) concluded, after a meta-analysis of 60 studies, that there is insufficient evidence to support deindividuation theory. The theory is also highly deterministic, as it states that the presence of a group determines aggressive behaviour. One of the more prominent social psychological explanations of aggression is social learning theory (SLT), introduced by Bandura (1963).
It states that behaviour is influenced by socio-environmental and psychological factors. The 4 basic principles of social learning theory are attention, retention, reproduction and motivation; in essence, these steps describe observing a behaviour, remembering it, copying it and gaining the motivation to repeat it. Bandura believed that role models play an important part in influencing children to become either aggressive or passive in their behaviour. He also stated that a role model of the same gender as the child will have more influence than a role model of the opposite sex. Bandura's theory has led to three models of how aggressive behaviour is encouraged or discouraged. The first is vicarious reinforcement, where a child witnesses a role model being rewarded for performing an aggressive act. The second is direct reinforcement, where the child themselves is rewarded or punished for certain behaviours (also known as operant conditioning). The third is self-efficacy, where a child reaches a level of self-confidence, through mastery experiences, social modelling, social persuasion and physiological responses, which allows them to perform a particular behaviour. The most notable study in this area is the Bobo doll study conducted by Bandura (1963), who studied 72 children between the ages of 3 and 6. The experimental groups were made up of 24 children with an aggressive role model and 24 children with a non-aggressive role model; in each group, 12 children had a role model of the opposite gender. All children went through the same procedure: watching their role model and then being presented with the same toys to play with. The results showed that children exposed to an aggressive role model behaved more aggressively towards the Bobo doll, that children with same-sex role models imitated behaviour more accurately, and that levels of aggression were higher in boys.
This study has high face validity, as it goes some way towards explaining why children from abusive families may develop aggressive personalities later in life. However, the ethics of the study are questionable. While the confidentiality of the participants' names was maintained, videos of the research (including the participants) have been made widely available. Also, if social learning theory holds true, then the study itself may have contributed to the conditioning of aggressive behaviour in its participants. As a theory, however, SLT has been used successfully to explain other behaviours, such as addiction. It can also help to explain the influence of the media on aggression, particularly in violent cases such as that of James Bulger (whose killers were reportedly influenced by the film Child's Play 3) or the aggressive influence of video games; it was in fact such concerns that led to the video game Manhunt being withdrawn by UK retailers. However, studies into social learning theory have so far taken place only in Western cultures, making it an ethnocentric theory. The theory has also been criticised for being deterministic, as it assumes that children will passively imitate observed behaviour without any cognitive processing. Both of these theories (deindividuation and social learning theory) ignore biological factors and can therefore be considered reductionist. A few other social psychological theories of aggression have been proposed: Runciman (1966) argued that aggressive behaviour results from relative deprivation, and Dollard (1939) believed that aggressive behaviour was the product of built-up frustration combined with arousal. Institutional aggression refers to the violent behaviour that exists within, and may in fact be a defining feature of, certain institutions or groups.
Acts of institutional aggression can range from the physical abuse of individuals (such as the abuse inflicted upon inmates of Abu Ghraib prison by American soldiers) to acts designed to destroy a religious, national or racial group. There are a number of models that attempt to explain institutional aggression, in addition to Zimbardo's Stanford prison study, which concluded that uniforms and social expectations played a large role in acts of institutional aggression. The importation model, developed by Irwin and Cressey (1962), claims that interpersonal violence in institutions is not a product of the institution itself but rather of the characteristics and personalities of the individuals who enter it. Research evidence in support of this theory includes Harer and Steffensmeier (1996), who analysed data from 58 US prisons and found that African-American inmates displayed significantly higher levels of violence but lower rates of alcohol and drug abuse than white inmates. This pattern is reflective of US society in general, suggesting that these behaviours had been imported into the institutional environment. Because of the conclusions it draws, this is an example of socially sensitive research. The research is also contradicted by De Lisi et al. (2004), who studied 800 male inmates and found no evidence that gang membership prior to imprisonment had any bearing on behaviour within prison, even though the importation model would predict otherwise. The deprivation model claims that it is the characteristics of the institution, rather than of the individuals, that cause aggression. This model is commonly applied to prisons, as it states that it is primarily the experience of imprisonment that causes extreme stress, which in turn leads to violence. Sykes (1958) stated that inmates engage in interpersonal violence because of the loss of freedom, boredom and loneliness they experience. The model is supported by McCorkle et al., who found that overcrowding, a lack of meaningful activity and a lack of privacy increased interpersonal violence in prisons. However, it is challenged by Poole and Regoli's (1983) finding that, in juvenile offenders across four observed institutions, pre-institutional violence was the best predictor of inmate aggression regardless of the features of the institution.
Further research contradicting this model is provided by Nijman et al., who found that increasing the personal space of inmates failed to decrease their violent behaviour. A final example of institutional aggression is the violence often displayed during initiation rituals. These are rituals, mandatory for new members of a group, that are supposed to create a common bond between members. The theory behind these rituals is that, by having endured an extreme initiation, the new member will feel part of the select group that has done so. Raphael (1988) stated that the main purpose of this type of initiation (known as hazing) is to symbolically take away the weakness of childhood and replace it with adulthood. Examples of this type of ritual outside of institutions can be observed in a number of tribal cultures which require children to pass an initiation ritual to become an adult and/or a warrior. A number of studies have found evidence for the use of initiation rituals to establish dominance and hierarchy among new members of institutions. McCorkle (1993) found that in prisons the domination of the weak during these rituals was seen by inmates as essential to maintaining status. Of course, this meant that passive behaviour was seen as a sign of weakness and an indication of an opportunity for exploitation. However, these rituals are not clear cut and are not always recognised by victims for what they are. ***** Biological explanations: Neurotransmitters are chemicals which allow electrical impulses to be passed from one brain cell to another. Two neurotransmitters in particular have been implicated in the control of aggressive behaviour. The first of these is serotonin; Cases (1995) stated that, at normal levels, it exerts an inhibitory effect on neurones. Low levels of serotonin therefore remove this inhibitory effect, leaving individuals more prone to impulsive and aggressive responses. Mann et al.
(1990) administered dexfenfluramine, a drug known to lower levels of serotonin, to 35 healthy adults. Questionnaires were then used to assess levels of aggression, and it was found that in men aggression levels rose after the drug had been administered. Badaway (2006) found that consumption of alcohol depleted serotonin levels in most people, and concluded that in susceptible individuals alcohol consumption may lead to aggression via this process. However, it is also possible that aggression is caused not by low serotonin levels as such but by a low rate of serotonin metabolism, which in turn leads to a greater number of receptors. A larger number of receptors means that serotonin is used up more quickly, leading to chronic serotonin depletion. Arora and Meltzer (1989) found a correlation between violent suicide and an increased serotonin receptor density in the prefrontal cortex. Lavine (1997) stated that there is research evidence to support the theory that increased dopamine activity increases aggressive behaviour. Buitelaar (2003) found that dopamine antagonists can be used successfully to reduce aggressive behaviour in violent individuals. Couppis et al. (2008) have more recently found evidence that dopamine may reinforce rather than cause aggression. This research suggests that some individuals intentionally seek out aggressive encounters because of the rewarding experience, caused by an increase in dopamine levels, that these encounters provide. Animal studies conducted on mice highlight the difficulty of establishing a link between dopamine and aggression: because dopamine also plays a role in the co-ordination of movement (as well as activating pleasure centres), removing it makes it difficult for the mice to move at all. It is therefore difficult to detect any drop in aggressive behaviour specifically, because there is a large drop in all behaviour.

Testosterone is an androgen hormone (so called because it produces male characteristics) and, rather than causing aggression directly, it is believed to make aggressive behaviour more likely. Archer (1991) carried out a meta-analysis of five studies into testosterone and aggression and found a low positive correlation between the two. Booth et al. (2006) interpreted this as a correlation associating testosterone with dominance rather than aggression (although aggression may be exhibited as a by-product), stating that dominance is behaviour designed to gain or retain status rather than to inflict harm. Contrary to this theory is research conducted by Bain et al. (1987), who found no significant difference between the testosterone levels of men charged with violent crimes such as murder and men charged with non-violent crimes such as burglary. Zitzmann (2006) suggested that the link between testosterone and aggression may only be relevant in individuals who produce excessively high levels of the hormone, such as strength athletes. And while nearly all research into testosterone is androcentric (focused on men), Archer suggests that the link between aggression and testosterone may in fact be stronger in women. Van Goozen et al. (2007) claim that there is a link between aggression and the hormone cortisol. This relationship is an inverse correlation; that is, lower levels of cortisol are associated with higher levels of aggressive behaviour. Virkkunen et al. (1985) conducted a study which found low levels of cortisol in habitually violent offenders. Popma et al. (2006) conducted a study which revealed a significant interaction between cortisol and testosterone in relation to overt aggression.
They found a significant positive relationship between testosterone and overt aggression in participants with low cortisol levels but not in those with high cortisol levels (a finding which could also explain inconsistencies in research into aggression). There is also longitudinal evidence for this relationship: McBurnett et al., for example, found that boys with consistently low levels of cortisol displayed three times as much aggressive behaviour as boys with either high or fluctuating cortisol levels. Although many studies support this conclusion, there are also studies, such as Gerra et al. (1997), which have reported that higher levels of cortisol correlate with higher levels of aggression. In addition to neurochemical explanations, it has also been suggested that aggression may be affected by genetics, and a number of studies have attempted to establish the role of genetic factors in aggressive behaviour. However, research in this area also creates opportunities for early screening of, and/or discrimination against, people considered at risk of aggressive behaviour because of their genes. Most research into genetics is also deterministic, because it assumes that aggression will be displayed given the presence of a certain genetic makeup. One line of genetic research is MZ/DZ twin studies. McGuffin and Gottesman (1985) found a concordance rate for aggressive behaviour of 87% in MZ twins and 72% in DZ twins. This study also highlighted the fact that the family environment, shared between siblings, exerts an important influence on aggressive behaviour. A meta-analysis of 3,795 pairs of twins by Mason and Frick (1994) concluded that approximately 50% of the variation between antisocial and non-antisocial behaviour could be attributed to genetic factors. A final twin study is that conducted by Coccaro et al. (1997), which analysed data from 182 MZ and 182 DZ twin pairs.
The study concluded that genes were responsible for more than 40% of individual differences in aggression and that environmental factors accounted for roughly 50% of individual differences in physical aggression. A second way of studying genetic influences is via adoption studies. If these studies find a greater similarity in levels of aggression between children and their biological parents than between children and their adoptive parents, this would indicate an important genetic influence on aggressive behaviour; if the opposite is found, it would suggest that environmental factors are more important than biological ones. One of the largest adoption studies was carried out by Hutchings and Mednick (1973), who reviewed 14,000 adopted children in Denmark. The study found a significant positive correlation between the number of convictions for criminal violence among the biological parents and the number of convictions for criminal violence among their adopted-away children. Evaluation points for twin and adoption studies include the following: the Hutchings and Mednick (1973) study used a large sample but focused only on male participants, meaning that the sample, although large, was not representative of the population as a whole. Button et al. (2004) conducted a study of 258 pairs of twins and found that, although there were significant genetic influences on aggression, the heritability of this behaviour was significantly higher in girls than in boys. However, conducting an MZ/DZ twin study on this scale is difficult because twins are relatively rare. Another problem with twin studies is that both twins usually share the same environment, making it difficult to distinguish biological influences from environmental ones. Researchers have only recently begun to pinpoint exactly what the genetic cause of aggression may be.
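Twin-study concordance figures like those above can be turned into a rough heritability estimate. Below is a minimal sketch using Falconer's formula, treating McGuffin and Gottesman's concordance rates as correlations; this is a textbook simplification, not a calculation those authors report, and different datasets give different estimates:

```python
# Falconer's formula: a rough heritability estimate from twin study data.
# h2 = 2 * (r_MZ - r_DZ). Here the MZ/DZ concordance rates reported by
# McGuffin and Gottesman (1985) are treated as correlations - a
# simplification used only for illustration.

def falconer_heritability(r_mz, r_dz):
    """Proportion of variance in a trait attributable to genes."""
    return 2 * (r_mz - r_dz)

def shared_environment(r_mz, h2):
    """Rough shared-environment estimate: c2 = r_MZ - h2."""
    return r_mz - h2

h2 = falconer_heritability(0.87, 0.72)
c2 = shared_environment(0.87, h2)
print(round(h2, 2))  # 0.3  - a modest genetic contribution
print(round(c2, 2))  # 0.57 - a large shared-environment contribution
```

Note how even this back-of-the-envelope calculation echoes the study's own conclusion that the shared family environment exerts an important influence on aggressive behaviour.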
Currently, the link between the MAOA gene (responsible for producing the enzyme monoamine oxidase A) and aggression is being investigated. Monoamine oxidase A is an enzyme which breaks down the neurotransmitters serotonin, dopamine and noradrenaline in the brain once they have carried their respective nerve impulses. Brunner et al. (1993) discovered this link while studying a family in the Netherlands who seemed particularly violent and aggressive. It was found that the men of this family had abnormally low levels of MAOA in their bodies, resulting in extremely high levels of the aforementioned neurotransmitters. Of course, this contradicts the research into serotonin and aggression, which has found an inverse correlation between the neurotransmitter and aggression. The MAOA gene has already been used in criminal courts to diminish the responsibility of defendants in severe cases such as murder. Morley and Hall (2003) suggest that the genes associated with aggression are not deterministic and only poorly predict aggression. In fact, it is now widely believed that the MAOA gene only causes aggression when a person with the defect has also had a violent or abusive upbringing. Caspi et al. (2002) conducted a study to test this theory. They found that male children who had been mistreated but possessed a variant of the gene resulting in increased expression of MAOA were less likely to display antisocial behaviour. Caspi believed it possible that early abuse alters serotonin, or that decreases in serotonin affect some types of antisocial behaviour but not others. ***** Aggression as an adaptive response: Daly and Wilson (1988) claimed that men, in particular, have a number of mate retention strategies to deter their mate from either leaving them or committing adultery. From an evolutionary perspective, mate retention is important as it greatly increases the chances of conceiving and thus passing on genes to the next generation. These strategies range from vigilance to violence. One such strategy, known as direct guarding, involves males attempting to restrict their partner's sexual autonomy and in turn prevent other males from gaining access to their mate. Direct guarding often involves prohibiting partners from speaking to or interacting with other men. This also helps prevent cuckoldry, which is likewise important from an evolutionary perspective.
Cuckoldry occurs when a woman deceives her male partner into investing resources in offspring conceived with another man. Platek and Shackelford (2006) stated that the risks of cuckoldry are higher for men than for women because men can lose both invested resources and reproductive opportunity. Sexual jealousy can therefore be seen as a measure to prevent the female from mating with other males - meaning that it is an adaptive response. The cuckoldry risk hypothesis, developed by Camilleri (2004), predicts that males are more likely to use sexually coercive tactics, such as partner rape, when the risk of cuckoldry is higher. Buss and Shackelford (2004) studied mate retention tactics in married couples. They studied 214 individuals and found that men reported a higher use of debasement techniques and intra-sexual threats, while women reported a greater use of verbal possession signals and threats of punishing infidelity. They also found that men married to younger women reported greater devotion to mate retention techniques, including commitment manipulation and violence against rival males. The claim that sexual jealousy is a major cause of violence against women is supported by Dobash and Dobash (1984), who found that victims frequently cited sexual jealousy on the part of their male partners as the major cause of violence against them. The use of direct guarding as a mate retention tactic is also made evident in a study by Wilson et al. (1995), who found that, among women who reported the use of this tactic by their male partners, 72% had required medical attention following an assault by their partner. Camilleri (2004) found that the risk of a partner's infidelity predicted the likelihood of sexual coercion in men but not in women.
However, most studies into mate retention (including this one) focus only on male violence against women, and are therefore androcentric, even though women also engage in mate retention strategies and violence against their partners. Despite these flaws, this research has led to the discovery that the use of mate retention strategies by males can serve as an early indicator of violence against the female partner; identifying the use of such tactics can therefore help prevent violence in relationships. Buss and Duntley (2006) propose that humans possess adaptations, evolved through natural selection, for what has now become known as murder. The activation of these adaptations is determined by a number of factors: the degree of genetic relatedness between killer and victim, the relative status of killer and victim, the sex of killer and victim, and the size and strength of both the killer's and the victim's allies. In most circumstances the extremely high costs of committing murder would have outweighed the benefits. However, Buss and Duntley (2006) claim that for early humans murder was functional in solving adaptive problems such as preventing harm, managing reputation and protecting resources. Daly and Wilson (1988) noted that men are more likely to kill other men whom they perceive to be sexual rivals, whereas women are more likely to kill in self-defence. The murder-as-an-adaptive-response hypothesis is supported by comparative studies of other species, including Ghiglieri (1999), who observed male lions and cheetahs killing the offspring of rival males. Fossey (1984) documented the same behaviour in mountain gorillas, which are evolutionarily much closer to humans. The evolved goal hypothesis argues that humans have developed motivations for specific goals which were, among early humans, associated with greater reproductive success. Hardy (1991) states that early humans calculated the costs of different methods of achieving these goals and may, on occasion, have concluded that murder was the best solution. Another area which has attracted a significant amount of research is group behaviour. There is a commonly held belief that people behave differently within a group than they do as individuals. Group behaviour provides advantages to the individuals within the group, and it is thought that it may provide an adaptive advantage to those involved. One popular example of group displays is the lynch mob. Between 1882 and 1930 there were 2,805 documented victims of lynch mobs in the Southern states of the US. Among the explanations for lynch mobs and other group displays are the dehumanisation of the victim and the power-threat hypothesis. The power-threat hypothesis, proposed by Blalock (1967), suggests that as minority group membership grows, members of the majority will intensify their efforts to maintain dominance; according to Blalock, the power threat represents the majority's fear of political power in the hands of the minority. This theory seems to explain lynch mob behaviour - some of the reasons given by Tolnay and Beck for black lynchings were trying to vote and voting for the wrong party. Ridley (1967) suggests that group displays of discrimination against outsiders are more likely when the group feels at risk. Hyatt (1999) argues that the hysterical desecration of the body in black lynchings and other forms of ritual killing was an attempt to reduce the body to a point where it was unrecognisable as a human being. Tolnay and Beck (1995) suggested that years of propaganda had reduced black people to over-simplistic and animalistic stereotypes.
These stereotypes further dehumanised the victim and also conditioned lynch mobs into believing that they were defending their community from threats. Clark (2006) studied murders by lynch mobs in São Paulo and found evidence that contradicted the power-threat hypothesis: Afro-Brazilians, although the main victims of lynch mobs, were not considered to pose any threat to the dominant community, and therefore fear of the minority could not be the determining factor in these killings. Rothenberg (1998) observed lynch mob violence in Guatemala. While most cases he observed were in response to serious crimes, others were in response to petty crime, such as poultry theft. Consistent with the idea of dehumanisation, already-dead victims were doused in gasoline and burned, as if the crowd were seeking further degradation of the victim. Religious displays are another example of group displays. During some religious rituals, self-harm is not uncommon - for example, self-flagellation during the Shia Muslim festival of Ashura. During this festival, some Shia men symbolically recreate the suffering of Hussein by flagellating themselves with knives and chains or cutting their foreheads until blood flows from their bodies. Extreme examples of group behaviour such as this seem to contradict the theory of natural selection and do not appear to be an adaptive response. However, an adaptive response is one which solves an evolutionary problem, and one possible explanation for these rituals is that they achieve co-operation between group members while discouraging outsiders who wish to gain the benefits of group membership without making any of the commitments. People who wish to join a group to gain the benefits, but who do not wish to share the commitments, are known as free riders.
Free riders are a problem for any group, and religious groups such as Shia Muslims guard against them with violent or aggressive religious displays which deter those who wish to join the group for selfish reasons. Zahavi (1975) refers to these costly signalling rituals as handicaps, i.e. reliable indicators of status and breeding potential, because they are too costly to be performed by low-quality individuals. This theory is known as costly signalling theory and can be applied to groups other than religious ones, as well as explaining certain evolutionary traits. Sosis and Bressler found that religious groups imposed twice as many costly commitments on their members as non-religious groups. This can be used as evidence that the groups requiring the greatest displays of commitment from their members produce the most long-lasting commitment and therefore survive the longest; the painful rituals endured by individuals thus have an adaptive value for the group. Chen believed that the costs of religious group membership should be related to its incentives. He observed that during the Indonesian financial crisis of the 1990s, Muslim families devoted a greater proportion of their wealth to religious institutions, suggesting that these groups provide social insurance in times of need.
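The free-rider logic of costly signalling can be sketched as a toy payoff model. The numbers and function names below are purely illustrative assumptions, not anything proposed by Zahavi, Sosis or Chen:

```python
# Toy model of costly signalling (illustrative values only). A group
# imposes a costly entry ritual; joining only pays off for individuals
# who value long-term membership highly, so free riders - who want the
# benefits without the commitment - are deterred.

RITUAL_COST = 50  # hypothetical cost of enduring the initiation ritual

def net_payoff(membership_value):
    """Benefit of joining minus the cost of the entry ritual."""
    return membership_value - RITUAL_COST

def will_join(membership_value):
    """An individual joins only if membership is worth the ritual's cost."""
    return net_payoff(membership_value) > 0

committed_member = 80  # values long-term membership highly
free_rider = 30        # wants the benefits without the commitment

print(will_join(committed_member))  # True  - the ritual is worth enduring
print(will_join(free_rider))        # False - the cost outweighs the benefit
```

The design point is simply that a sufficiently costly ritual flips the sign of the payoff for uncommitted joiners, which is the core claim of costly signalling theory.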

Topic III: Relationships (separated into three essay topics) Formation, maintenance and breakdown: The first distinct stage of a relationship is its formation, and there are three psychological theories proposed to explain it. The first is the reward/need satisfaction (RNS) theory developed by Byrne and Clore (1970). RNS theory proposes that people are attracted to those whose company they find rewarding because it fulfils a specific need, e.g. company, financial security or sex. Argyle (1992) stated that individuals form relationships with people whose presence is directly associated with reinforcement: positive emotions provide positive reinforcement, while comfort and support during times of hardship provide negative reinforcement. Classical conditioning also occurs, with people associating the person (neutral stimulus) with the positive mood (unconditioned stimulus) that they display. This theory is supported by May and Hamilton (1980), who asked 30 female undergraduates to make judgements on photographs of attractive and unattractive males while listening to pleasant music, unpleasant music or no music. It was found that the attractive males were evaluated most positively and that more positive judgements were made in the pleasant music condition. However, this study lacks ecological and mundane realism. Also, Hayes (1985) pointed out that RNS theory only explores the rewards people gain from relationships, when many rewards are also gained from giving these benefits as well as receiving them. Lott (1994) found that in many cultures women are more focused on meeting the needs of others than on reinforcement. RNS theory does, however, explain why women may stay in abusive relationships, as doing so fulfils the need for affiliation and stability. Another theory of relationship formation concerns the matching hypothesis and social desirability. The matching hypothesis was proposed by Walster et al.
(1966) and states that the more socially desirable a person is, in terms of attractiveness, social standing etc., the more desirable they would expect a partner to be. The theory also states that matched couples (where both partners are equally desirable) are more likely to have enduring relationships than mismatched couples. Murstein (1972) argues that individuals' initial attraction during the formative stages of a relationship depends on the available cues that indicate social desirability. Walster et al. (1966) conducted a study in which 752 undergraduates were invited to a 'get acquainted' dance, where they believed they had been matched with their partners on social desirability when in fact the pairing was random. The success of these matches was assessed by follow-up questionnaires six months later. It was found that participants reacted more positively to their matches the more attractive those matches were, regardless of the participants' own level of physical attractiveness; other factors, such as intelligence or personality, did not affect subsequent attempts to follow up on matches. However, Murstein (1972) conducted a correlational study on actual couples and found evidence to support the matching hypothesis. A major problem with all studies into the matching hypothesis is that the concept of physical attractiveness is poorly operationalised. Also, the matching hypothesis has become associated only with physical attractiveness; the idea that a person can compensate for a lack of physical attractiveness via wealth, personality etc. has therefore become known as complex matching, a term suggested by Hatfield and Sprecher (2009). The filter/stage model, proposed by Kerckhoff and Davis (1962), states that relationships develop through different filters, with different factors being most relevant at different times. Potential partners are filtered so that the available choice is narrowed down to a relatively small number of desirable candidates.
The first filter is similarity of demographics: ethnicity, social class, educational background, religion, physical attractiveness etc. The second filter is similarity of attitudes and beliefs and ease of communication. The third filter is the complementarity of emotional needs. Kerckhoff and Davis (1962) completed a longitudinal study of student couples, comparing those who had been together for more and for less than 18 months. The participants were asked to report the similarity of attitude and personality in their partners. It was found that similarity of attitudes was the most important factor for roughly the first 18 months of a relationship, after which psychological and emotional needs became more important. Saegert (1973) found that women in the company of other women at a wine-tasting event preferred these people over those they had not spent time with, supporting the frequency-of-interaction model, although this result cannot be generalised to dyadic relationships. It has also been suggested that older adults filter relationships more quickly, and that social factors also determine the longevity of a relationship, not just its formation. The next stage of a relationship is maintenance, for which there are two main theories. The first of these is social exchange theory, developed by Thibaut and Kelley (1959). This states that a relationship is a series of social exchanges between individuals, whereby each person attempts to maximise their rewards and minimise their costs. This has led to a four-stage model of exchange: sampling, bargaining, commitment and institutionalisation. There are also two reference levels: the comparison level (based on previous experiences of the relationship) and the comparison level for alternatives (based on the possible benefits of other relationships). Rusbult and Zembrodt (1983) conducted a longitudinal study of 30 students in heterosexual relationships, who were asked to fill out a questionnaire every 17 days for 7 months weighing up the rewards and costs involved in their relationship.
For those who maintained the relationship, an increase in rewards led to increased satisfaction, while increases in costs had little impact. For those who ended their relationship, it was found that this was mainly due to the existence of an attractive alternative. Equity theory, proposed by Walster et al. (1978), offers an explanation of how social exchange works in dyadic relationships. It assumes that people seek equity in relationships rather than simply benefits, as social exchange theory assumes. People who contribute greatly to a relationship but receive little benefit would perceive inequity, and vice versa. Walster provides four principles of equity theory: people try to maximise their rewards in a relationship; the distribution of rewards is negotiated to ensure equity; inequitable relationships produce dissatisfaction; and, so long as there is a chance of restoring equity, individuals will endeavour to save the relationship. Clark and Mills (1979) identified two types of couple: the communal couple and the exchange couple. The communal couple have few concerns over equity, believing that costs and rewards will eventually balance out in a relationship. The exchange couple have greater concern over rewards and costs, as predicted by social exchange theory. Prins et al. (1993) interviewed male and female partners and found marked gender differences: women reported having significantly more extra-marital affairs than men in response to perceived inequity in a relationship. Rollie and Duck (2002) provide a model of the breakdown of a relationship.
This model follows these stages: breakdown, where one partner becomes increasingly dissatisfied with the relationship; intrapsychic processes, involving social withdrawal and resentment; dyadic processes, when partners discuss perceived inequities (it is this stage which may result in the reconciliation of the relationship); social processes, where advice is sought from outside the relationship, possibly including the denigration of the partner or scapegoating (social implications are discussed here); grave-dressing processes, where attempts to organise post-relationship lives begin; and resurrection processes, where each partner prepares for relationships afterwards (including creating social value and evaluating past relationships). There are some gender differences highlighted by this model: women report more post-relationship growth than men, which could be due to greater social support, although research does not suggest that greater social support leads to greater post-relationship growth. Akert (1998) found that the role people played in the decision to end the relationship was the most powerful predictor of the dissolution experience; the partner who did not initiate the break-up was the most miserable in the weeks after the relationship ended. However, note the heterosexual bias in all of these studies. Evidence suggests that the breakdown of relationships is not a modern phenomenon: among the Ache tribe of Paraguay, Hill and Hurtado (1996) found that a typical adult experiences 12 marriages and 12 break-ups by the age of 40. Buss (1989) stated that women, more than men, prefer mates with resources or the potential to gain resources. However, this relies on the partner's willingness to share these resources, which can be gauged by the level of emotional investment; women therefore face greater risks associated with the loss of emotional investment.
This increased emotional investment can be exploited by men who feel threatened by the breakdown of a dyadic relationship, i.e. by signalling an increase in emotional investment to maintain sexual access. Buss and Schmitt (1993) state that, according to evolutionary theory, males have evolved a desire for sexual variety, and infidelity may be an adaptive response aiding the rejector in finding a replacement mate quickly.

*****

Human reproductive behaviour: These behaviours have their origins in the evolutionary past of humanity and conveyed survival or reproductive advantages on early humans. The evolutionary basis for physical attraction states that individuals with attractive faces are preferred as mates because of the potential benefits of passing these characteristics on to their offspring. Thornhill and Gangestad (1994) found that women have clear facial preferences, such as a large jaw and prominent cheekbones, which result from male sex hormones and are therefore produced by the most suitable partners. Langlois et al. conducted a meta-analysis of 919 studies into physical attractiveness. It found considerable agreement between cultures in judging levels of attractiveness, and a preference for attractive faces from as young as 26 months old. The nature of sexual selection has led to two types: intrasexual selection and intersexual selection. In humans, males must compete with other males (intrasexual) for sexual access to females, who, because of this competition, choose their partner (intersexual) to ensure they have made the correct choice for their offspring. Buss (1989) investigated whether there are universal preferences for selecting mates. The study spanned 37 cultures across six continents. Sampling methods varied for each country, and over 10,000 individuals of different religions, ethnicities and socioeconomic statuses were surveyed.
The study concluded that women of all cultures preferred men with resources or the potential to gain resources; men placed a greater emphasis on physical attractiveness as an indication of health/fertility; men preferred partners who were younger than them (again, perhaps an indication of fertility); and some characteristics, i.e. intelligence, kindness and dependability, were desired by both sexes. This study suggested that males prefer mates who are well into adulthood; however, research by Anderson et al. (1986) has shown that females peak in fertility earlier than previously thought, suggesting that males value fertility in a mate more than reproductive value. A study on this scale relies on representative sampling; however, rural and poorly educated individuals were under-represented. Another problem is that social norms differed in many of the cultures studied, including those where arranged marriages are most common, which appears to contradict evolutionary explanations. Sexual selection theory entails a number of gender differences. One of these is short-term mating preferences: the more females a male is able to impregnate, the greater his chance of reproductive success. However, the costs of these short-term strategies are greater for females because of the risk of mating with poor-quality males, which could lead to poor-quality offspring. Clark and Hatfield (1989) conducted a study in which attractive male and female experimenters approached university students of the opposite sex and asked, among other requests, whether they would have sex with them. None of the female participants agreed, whereas 75% of the male participants did. After such encounters, there is a tendency for males to show decreased interest in their casual partners, which Buss and Schmitt (1993) argue is an evolved adaptation ensuring they do not spend too much time with one partner. There are also differences in sexual jealousy: Buss (1993) asked male and female participants to imagine their partners having sex with, and falling in love with, another person. Men reported greater distress at sexual infidelity, while women reported greater distress at emotional infidelity.
However, demand characteristics and differing baseline levels of emotional stress were present in this study. Another consequence of sexual selection is sperm competition: in species where females mate with more than one partner, males have developed larger testicles to increase their chances of reproductive success, as they are competing for fertilisation rather than for females. Baker and Bellis (1995) therefore argue, based on the relative size of human testicles, that early human females were moderately promiscuous. Harvey and May (1989) suggest that ethnic differences may represent adaptive differences in the mating strategies of ancestral populations, because autopsies have shown Chinese testicles to be roughly half the size of Danish testicles despite only small differences in body size. Also, most studies into sexual selection have focused on preferences rather than real-life choices; however, a study of actual marriages by Buss (1989) confirmed many of the sexual selection predictions. Penton-Voak et al. (1999) found that women's preferences (particularly facial ones) are not static but change according to their position in the menstrual cycle: women preferred more masculine faces at the most fertile point of the cycle and more feminine faces at the least fertile point. Parental investment theory, proposed by Trivers (1972), refers to the time, energy, resources etc. invested in an individual offspring to increase its chance of survival. It has been suggested that female investment is greater because female gametes are more costly to produce (in terms of time) than male gametes. After conception the female partner must carry the foetus to full term, give birth and then continue to breastfeed for up to a further two years.
Evolutionary theory also states that, to compensate for the growing skull size of infants, childbirth occurs earlier in humans than in other mammals, increasing the time mothers must invest in post-natal care. Males, on the other hand, do not have the same certainty that the child is in fact their own; therefore Daly and Wilson (1978) argue that males devote comparatively little time to post-natal child care. The risk for men is investing resources in offspring that are not theirs and will not continue their genetic lineage. Buss (1995) states that sexual jealousy may have evolved as an adaptive solution to this. This theory explains female mate preferences, i.e. finding a partner who will invest in their offspring, and why males favour shorter relationships while females prefer longer ones. However, it does not explain Buss and Schmitt's finding that lesbian couples prefer to practise only committed sex, despite this being non-reproductive. Geher et al. (2007) conducted a study in which 91 undergraduates were asked to complete a parental investment perception scale measuring how prepared they perceived themselves to be for parenting. In this condition, no difference was found between male and female perceptions of readiness. Participants were then exposed to a series of scenarios in which parenting could be perceived as costly, while their ANS arousal levels were measured. Men showed consistently higher ANS arousal than women, suggesting that males are less prepared to confront costly parenting issues than females. The difference between perceived and actual readiness can be explained by the social desirability bias introduced by self-report questionnaires on issues of parenting. Another study is that conducted by Norman and Kendrick to see if there were differences between men and women in short-term relationship criteria.
Participants were asked to design a short-term partner but could only choose a certain number of attributes from a pre-defined list. The study found that both men and women were interested in physical attractiveness in a short-term partner. It was also found that men seek similar qualities in short-term and long-term partners, i.e. fertility and youth. However, like the Geher et al. (2007) study, it used self-report questionnaires, which may have led to socially desirable results.

Kin selection theory, proposed by Hamilton (1964), suggests that individuals are more likely to act altruistically towards genetic relatives. However, this does not explain family conflict. Workman and Reader (2008) suggest that these conflicts can be explained through parental investment theory. Weaning conflict arises because, when a child reaches weaning age, the mother is ready to breed again; she reaches this point (where it is more beneficial to breed again and devote resources to the new child) before the original child reaches an age at which it is beneficial for her to withdraw parental investment. This leads to sibling rivalry and competition between offspring for access to parental resources. Trivers (1974) argues that children use psychological manipulation, such as tantrums, to gain parental investment. Horr (1977) observed this behaviour in orangutans, where young offspring would often whine while being weaned. Daly and Wilson (1988) also describe strategies such as regression, in which children exaggerate their dependency to gain access to parental resources. Fouts et al. (2005) tested this hypothesis on farmers and foragers in Northern Congo: 22 forager children and 21 farmer children between the ages of 18 and 59 months were studied, with researchers recording behaviours such as emotional state, attachment behaviour and care-giving responses. The children of farmers showed high levels of distress at weaning while the children of foragers did not. The farmers' children were weaned earlier and more abruptly, which may explain their distress, whereas breastfeeding declined gradually for the foragers' children.
Flinn (1989) extended parental investment to inclusive fitness, which states that it is more beneficial for mothers to have another child than a grandchild, partly explaining mother/daughter conflict at puberty. It is also unproductive for females to have children while their mothers are still fertile - maternal suppression.

*****

Effects of early experience and culture: Bowlby (1982) proposed that, based on early experiences with the mother, individuals develop an internal working model of the self in relation to the primary attachment figure. This model is thought to shape how reliable and available others are expected to be and, according to Sroufe and Fleeson (1986), how disappointment and emotional discomfort are likely to be handled. It predicts that adult relationships will reflect early attachment styles, i.e. secure or insecure. Hazan and Shaver (1987) published a love quiz in an American newspaper to collect information on people's early attachment experiences and adult romantic experiences. They found that people with secure attachments as infants were more likely to have happy, lasting relationships, while those with insecure attachments as infants were more likely to be divorced and found relationships more difficult. Feeney et al. (1994) conducted a meta-analysis of studies investigating communication and satisfaction in relationships and found that secure attachment was associated with greater and more responsive self-disclosure and more supportive interpersonal interaction. The study concluded that anxiety about attachment was the driving force behind destructive patterns in relationships. Qualter and Munn (2005) found that children learn about themselves through experiences with other children. These experiences are internalised to form expectations about future relationships, which then influence how people approach adult relationships. Nangle et al.
(2003) state the importance of friendship in this process: having a close friend to confide in creates feelings of acceptance, which is an important factor in future relationships. A study into attachment is provided by Morrison et al. (1997), who asked 151 male and 217 female college students to complete questionnaires about their most recent intimate relationship; they were also assessed on attachment style. Students with avoidant attachments reported more hostility in their relationships than students with a secure style, while those with a secure style also reported more interdependence. However, some children develop reactive attachment disorder due to neglect during infancy; such children tend to be indiscriminate in their relationships, which can affect intimate adult relationships. Relationships between adolescents and parents differ from early relationships because the child can now differentiate between different types of relationship. According to Erikson (1968), relationships in adolescence work through a number of intimacy issues, the most important period being middle to late adolescence. According to social learning theory, parents may transmit ideas about opposite-sex relationships to their children through modelling. Gray and Steinberg (1999) conclude that adolescents raised in an environment where their parents are emotionally receptive are better prepared for intimate relationships than those who are not. In post-divorce relationships, Beal and Hochman (1991) state that mother-daughter relationships are often based on allegiances against the male of the pre-divorce relationship. Koerner et al. (2002) found that most mothers disclosed sensitive information to their teenage daughters on a number of divorce-related topics in the first 2 years after a divorce. This can lead to the parentification of teenage girls and contribute significantly to poor adjustment into adulthood. Moeller and Stattin

(2001) found that teenage boys who shared a trustful relationship with their father after a divorce felt greater satisfaction with their romantic partners in adulthood; the same was not found for mother-daughter relationships. Frey and Rothlisberger (1996) found that adolescents had twice as many relationships with peers as with family. It is argued that friendships become more important in adolescence because they perform additional psychological tasks such as establishing autonomy and independence. Blos (1967) stated that peers provide a way-station to individuation because they prevent feelings of loneliness without requiring any long-term commitments. Erikson (1968) claimed that romantic relationships were only possible once the individual had resolved the identity crisis by exploring different identities, testing their ability to form intimate relationships and relinquishing their psychological dependence on their parents. Madsen (2001) investigated the effects of adolescent dating behaviour on early adult relationships and found that individuals who engaged in low or moderate levels of dating between the ages of 15 and 17 experienced more satisfying adult relationships than individuals described as heavy daters. While most research shows positive effects of adolescent relationships, Haynie (2003) found a significant association between romantic involvement and deviant behaviour, and Neeman et al. (1995) found that adolescent romantic involvement led to a decrease in academic achievement.

To categorise different cultures, Hofstede (1980) analysed data on the work experiences of 100,000 employees in 50 countries. He was then able to classify the different cultures on a continuum from individualist to collectivist. In Western cultures, people have much greater social and geographical mobility, meaning that they are able to interact with a large variety of people.
This means there is a greater choice of potential partners, as opposed to non-Western cultures, where people do not interact with as many others and there is therefore less choice of suitable partners. Udry (1974) stated that in collectivist cultures the primary bases for marriage were family alliances, economic status etc. Qureshi (1991) identified three types of arranged marriage: planned (via parents), chaperoned interaction and joint-venture (via parents and children). Jankowiak and Fischer (1992) found evidence of romantic love in most of the 166 societies that they studied. It has therefore been concluded that romantic love is a distinct emotional system present in all humans, regardless of cultural background. Allgeier and Wiederman (1991) asked college students if they would marry a person who had all of the qualities they desired but whom they did not love. Only 14% of males and 9% of females said they would, suggesting that in Western cultures love is a prerequisite for marriage. Levine et al. (1995) found that this was true across individualist cultures, but not collectivist ones. Yum and Hara (2005) studied the role of self-disclosure in relationship development on the internet in Korea, Japan and the US. They found that self-disclosure was associated with online relationship development, but only in the US was this directly related to increased levels of trust. Ma (1996) studied internet relationships in East Asian and US college students and found that both groups tended towards more rapid self-disclosure on the internet than face-to-face; however, the US students did not perceive the East Asian students to be as self-disclosing as themselves. Another aspect of relationships that has attracted cross-cultural research is divorce. Betzig (1989) studied divorce in 160 countries and found that the most common reasons for divorce were infidelity, sterility and maltreatment of one partner.
Divorce rates in many Asian countries have recently begun to rise, for which Huang (2005) proposes a number of reasons: the urbanisation and changing cultural norms of Asian societies, enhanced choice due to the education and employment of Asian women, the loosening of social control over marriage, increased leniency of divorce laws and the growth of individualism. This last point reflects a cultural shift from collectivism towards individualism. Simmel (1971) states that individualism is associated with higher divorce rates because it encourages individuals to seek their ideal partner rather than commit to their current one. To conclude, here are some evaluative points about research into Western and non-Western relationships. Hofstede's collectivist/individualist distinction has recently been challenged: for example, Li et al. (2006) compared attitudes towards relationships in Canada, China and India and found no differences between Canadian and Chinese cultures, though there were differences between Indian and Chinese cultures. However, this could be due to the Chinese shift towards individualism described above. Many non-Western communities within Western countries are gradually becoming exposed to Western ideals. This is supported by Goodwin et al. (1997), who found that of 70 Hindu Gujarati couples living in Leicester, only 8% had arranged marriages. Myers et al. (2005) asked 45 individuals in India who had arranged marriages to complete a questionnaire measuring marital satisfaction and what they believed was important for a successful marriage; this was then compared with results gathered from choice marriages in the US. No differences in marital satisfaction were found, although there were a few differences in what was considered important for a successful marriage.
