User:Eyoungstrom/sandbox

LOTR edits


Several areas in or near Glenorchy were settings for scenes in The Lord of the Rings, including Lothlórien, Isengard (along the Routeburn Track), and Amon Hen.[1]

Local companies offer themed treks, horseback rides, and boat trips to see the sites.

Thanksgiving Experiments

Eric Youngstrom has given you a *second* Turkey! Turkeys promote WikiLove and hopefully this has made your day yet another notch better -- a second turkey makes sure you have leftovers to extend the holiday and share with friends!

[Here] are some slides that I will post on the Wikimedia Usergroup tomorrow (it's late here, and I am full of turkey!). Please also check your email for some things that I am still in negotiations to be able to upload to Commons (or I will just make a new recording -- the slides are already CC BY 4.0). Happy Thanksgiving! Eyoungstrom Talk Spread the goodness of turkey by adding {{subst:Thanksgiving Turkey}} to their talk page with a friendly message.

Playing with infobox

Helping Give Away Psychological Science
Type of site: 501(c)3
URL: hgaps.org
Commercial: No
Launched: 2015
Current status: Active

HGAPS.org
Helping Give Away Psychological Science
Abbreviation: HGAPS
Nickname: HGAPS
Pronunciation: as two syllables, "H-Gaps"
Formation: 2016
Founders: Mian-Li Ong, PhD, and Eric Arden Youngstrom, PhD
Founded at: Chapel Hill, NC, USA
Type: Corporation that normally receives no more than one-third of its support from gross investment income and unrelated business income and at the same time more than one-third of its support from contributions, fees, and gross receipts related to exempt purposes
822966807
Registration no.: 501(c)(3) - A religious, educational, charitable, scientific or literary organization
Legal status: Active
Purpose: Mental Health Association, Multipurpose
Website: www.hgaps.org

Template for writing articles about a psychological measure


This section is NOT included in the actual page. It is an overview of what is generally included in a page.

  • Versions, if more than one kind or variant of the test or procedure exists
  • Psychometrics, including validity and reliability of test results
  • History of the test
  • Use in other populations, such as other cultures and countries
  • Research
  • Limitations

Lead section


The lead section gives a quick summary of what the assessment is. Here are some pointers (please do not use bullet points when writing the article):

  • What are its acronyms?
  • What is its purpose?
  • What population is it intended for? What do the items measure?
  • How long does it take to administer?
  • Who (individual or groups) was it created by?
  • How many items does it include? Is it multiple choice?
  • What has been its impact on the clinical world in general?
  • Who uses it? Clinicians? Researchers? What settings?


Reliability


The rubrics for evaluating reliability and validity are now on published pages in Wikiversity. You will evaluate the instrument based on these rubrics. Then, you will delete the code for the rubric and complete the table (located after the rubrics). Don't forget to adjust the headings once you copy/paste the table in!

An example using the table from the General Behavior Inventory is attached below.

Rubric tables


Reliability


Reliability refers to whether the scores are reproducible. Unless otherwise specified, the reliability scores and values come from studies done with a United States population sample. Here is the rubric for evaluating the reliability of scores on a measure for the purpose of evidence-based assessment.

Rubric for evaluating norms and reliability for the General Behavior Inventory (table from Youngstrom et al., extending Hunsley & Mash, 2008; *indicates new construct or category)
Criterion Rating (adequate, good, excellent, too good*) Explanation with references
Norms Adequate Multiple convenience samples and research studies, including both clinical and nonclinical samples[citation needed]
Internal consistency (Cronbach’s alpha, split half, etc.) Excellent; too good for some contexts Alphas routinely over .94 for both scales, suggesting that scales could be shortened for many uses[citation needed]
Inter-rater reliability Not applicable Designed originally as a self-report scale; parent and youth report correlate about the same as cross-informant scores correlate in general[2]
Test-retest reliability (stability) Good r = .73 over 15 weeks. Evaluated in initial studies,[3] with data also showing high stability in clinical trials[citation needed]
Repeatability Not published No published studies formally checking repeatability
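
To make the internal consistency and test-retest rows of the rubric concrete, here is a minimal sketch (with made-up data) of how Cronbach's alpha and a retest correlation are computed; the values it prints are illustrative only and are not taken from the GBI studies cited above.

```python
# Illustrative only: computing Cronbach's alpha and a test-retest correlation
# on simulated item responses. Nothing here reproduces published GBI results.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=200)                                       # hypothetical respondents
items = true_score[:, None] + rng.normal(scale=0.7, size=(200, 10))     # 10 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")

# Test-retest reliability is simply the correlation between totals at two occasions.
time1 = items.sum(axis=1)
time2 = time1 + rng.normal(scale=2.0, size=200)                         # retest with added noise
print(f"retest r = {np.corrcoef(time1, time2)[0, 1]:.2f}")
```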

Validity


Validity describes the evidence that an assessment tool measures what it was supposed to measure. There are many different ways of checking validity. For screening measures, diagnostic accuracy and discriminative validity are probably the most useful ways of looking at validity. Unless otherwise specified, the validity scores and values come from studies done with a United States population sample. Here is a rubric for describing validity of test scores in the context of evidence-based assessment.

Evaluation of validity and utility for the General Behavior Inventory (table from Youngstrom et al., unpublished, extended from Hunsley & Mash, 2008; *indicates new construct or category)
Criterion Rating (adequate, good, excellent, too good*) Explanation with references
Content validity Excellent Covers both DSM diagnostic symptoms and a range of associated features[3]
Construct validity (e.g., predictive, concurrent, convergent, and discriminant validity) Excellent Shows convergent validity with other symptom scales, longitudinal prediction of development of mood disorders,[4][5][6] criterion validity via metabolic markers[3][7] and associations with family history of mood disorder.[8] Factor structure complicated;[3][9] the inclusion of “biphasic” or “mixed” mood items creates a lot of cross-loading
Discriminative validity Excellent Multiple studies show that GBI scores discriminate cases with unipolar and bipolar mood disorders from other clinical disorders;[3][10][11] effect sizes are among the largest of existing scales[12]
Validity generalization Good Used both as self-report and caregiver report; used in college student[9][13] as well as outpatient[10][14][15] and inpatient clinical samples; translated into multiple languages with good reliability
Treatment sensitivity Good Multiple studies show sensitivity to treatment effects comparable to using interviews by trained raters, including placebo-controlled, masked assignment trials.[16][17] Short forms appear to retain sensitivity to treatment effects while substantially reducing burden[17][18]
Clinical utility Good Free (public domain), strong psychometrics, extensive research base. Biggest concerns are length and reading level. Short forms have less research, but are appealing based on reduced burden and promising data
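
As a rough illustration of what the discriminative validity and effect-size language in the table means, the following sketch computes Cohen's d and the area under the ROC curve (AUC) for two hypothetical groups of scale totals; the group means, spreads, and resulting values are invented for the example and are not drawn from the cited studies.

```python
# Illustrative only: two common ways to summarize how well a scale separates
# diagnostic groups, computed on made-up totals for a clinical and a comparison group.
import numpy as np

rng = np.random.default_rng(3)
bipolar = rng.normal(loc=45, scale=15, size=80)      # hypothetical scale totals
comparison = rng.normal(loc=25, scale=15, size=120)

# Cohen's d: standardized mean difference using the pooled standard deviation.
n1, n2 = len(bipolar), len(comparison)
pooled_sd = np.sqrt(((n1 - 1) * bipolar.var(ddof=1) +
                     (n2 - 1) * comparison.var(ddof=1)) / (n1 + n2 - 2))
d = (bipolar.mean() - comparison.mean()) / pooled_sd

# AUC: probability that a randomly chosen case scores higher than a randomly chosen
# comparison participant, counting ties as half (the Mann-Whitney interpretation).
auc = ((bipolar[:, None] > comparison[None, :]).mean()
       + 0.5 * (bipolar[:, None] == comparison[None, :]).mean())

print(f"Cohen's d = {d:.2f}, AUC = {auc:.2f}")
```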

Development and history

  • Why was this instrument developed? Why was there a need to do so? What need did it meet?
  • What was the theoretical background behind this assessment? (e.g., addresses the importance of 'negative cognitions', such as intrusive, inaccurate, or sustained thoughts)
  • How was the scale developed? What was the theoretical background behind it?
  • How are these questions reflected in applications to theories, such as cognitive behavioral therapy (CBT)?
  • If there were previous versions, when were they published?
  • Discuss the theoretical ideas behind the changes

Impact

  • What was the impact of this assessment? How did it affect assessment practice in psychiatry, psychology, and other health care professions?
  • What can the assessment be used for in clinical settings? Can it be used to measure symptoms longitudinally? Developmentally?

Use in other populations

  • How widely has it been used? Has it been translated into different languages? Which languages?

Research

  • Any recent research done that is pertinent?

Limitations

  • If it is a self-report measure, what are the usual limitations of self-report?
  • State the status of this assessment (is it copyrighted? If free, link to it).

See also


Here, it would be good to link to any related articles on Wikipedia. As we create more assessment pages, this should grow.

For instance:


Example page


RCADS Section


This is a draft section for the RCADS (Revised Child Anxiety and Depression Scale).

https://www.childfirst.ucla.edu/wp-content/uploads/sites/163/2018/03/RCADSUsersGuide20150701.pdf

References

  1. ^ Brodie, Ian (2011). The Lord of the Rings Location Guidebook (2nd ed.). Auckland, NZ: HarperCollins. pp. 98–103. ISBN 978-1869505301.
  2. ^ Achenbach, TM; McConaughy, SH; Howell, CT (March 1987). "Child/adolescent behavioral and emotional problems: implications of cross-informant correlations for situational specificity". Psychological Bulletin. 101 (2): 213–32. doi:10.1037/0033-2909.101.2.213. PMID 3562706.
  3. ^ a b c d e Depue, Richard A.; Slater, Judith F.; Wolfstetter-Kausch, Heidi; Klein, Daniel; Goplerud, Eric; Farr, David (1981). "A behavioral paradigm for identifying persons at risk for bipolar depressive disorder: A conceptual framework and five validation studies". Journal of Abnormal Psychology. 90 (5): 381–437. doi:10.1037/0021-843X.90.5.381. PMID 7298991.
  4. ^ Klein, DN; Dickstein, S; Taylor, EB; Harding, K (February 1989). "Identifying chronic affective disorders in outpatients: validation of the General Behavior Inventory". Journal of Consulting and Clinical Psychology. 57 (1): 106–11. doi:10.1037//0022-006x.57.1.106. PMID 2925959.
  5. ^ Mesman, Esther; Nolen, Willem A.; Reichart, Catrien G.; Wals, Marjolein; Hillegers, Manon H.J. (May 2013). "The Dutch Bipolar Offspring Study: 12-Year Follow-Up". American Journal of Psychiatry. 170 (5): 542–549. doi:10.1176/appi.ajp.2012.12030401. PMID 23429906.
  6. ^ Reichart, CG; van der Ende, J; Wals, M; Hillegers, MH; Nolen, WA; Ormel, J; Verhulst, FC (December 2005). "The use of the GBI as predictor of bipolar disorder in a population of adolescent offspring of parents with a bipolar disorder". Journal of Affective Disorders. 89 (1–3): 147–55. doi:10.1016/j.jad.2005.09.007. PMID 16260043.
  7. ^ Depue, RA; Kleiman, RM; Davis, P; Hutchinson, M; Krauss, SP (February 1985). "The behavioral high-risk paradigm and bipolar affective disorder, VIII: Serum free cortisol in nonpatient cyclothymic subjects selected by the General Behavior Inventory". The American Journal of Psychiatry. 142 (2): 175–81. doi:10.1176/ajp.142.2.175. PMID 3970242.
  8. ^ Klein, DN; Depue, RA (August 1984). "Continued impairment in persons at risk for bipolar affective disorder: results of a 19-month follow-up study". Journal of Abnormal Psychology. 93 (3): 345–7. doi:10.1037//0021-843x.93.3.345. PMID 6470321.
  9. ^ a b Pendergast, Laura L.; Youngstrom, Eric A.; Brown, Christopher; Jensen, Dane; Abramson, Lyn Y.; Alloy, Lauren B. (2015). "Structural invariance of General Behavior Inventory (GBI) scores in Black and White young adults". Psychological Assessment. 27 (1): 21–30. doi:10.1037/pas0000020. PMC 4355320. PMID 25222430.
  10. ^ a b Danielson, CK; Youngstrom, EA; Findling, RL; Calabrese, JR (February 2003). "Discriminative validity of the general behavior inventory using youth report". Journal of Abnormal Child Psychology. 31 (1): 29–39. doi:10.1023/a:1021717231272. PMID 12597697. S2CID 14546936.
  11. ^ Findling, RL; Youngstrom, EA; Danielson, CK; DelPorto-Bedoya, D; Papish-David, R; Townsend, L; Calabrese, JR (February 2002). "Clinical decision-making using the General Behavior Inventory in juvenile bipolarity". Bipolar Disorders. 4 (1): 34–42. doi:10.1034/j.1399-5618.2002.40102.x. PMID 12047493.
  12. ^ Youngstrom, Eric A.; Genzlinger, Jacquelynne E.; Egerton, Gregory A.; Van Meter, Anna R. (2015). "Multivariate meta-analysis of the discriminative validity of caregiver, youth, and teacher rating scales for pediatric bipolar disorder: Mother knows best about mania". Archives of Scientific Psychology. 3 (1): 112–137. doi:10.1037/arc0000024.
  13. ^ Alloy, LB; Abramson, LY; Hogan, ME; Whitehouse, WG; Rose, DT; Robinson, MS; Kim, RS; Lapkin, JB (August 2000). "The Temple-Wisconsin Cognitive Vulnerability to Depression Project: lifetime history of axis I psychopathology in individuals at high and low cognitive risk for depression". Journal of Abnormal Psychology. 109 (3): 403–18. doi:10.1037/0021-843X.109.3.403. PMID 11016110.
  14. ^ Klein, Daniel N.; Dickstein, Susan; Taylor, Ellen B.; Harding, Kathryn (1989). "Identifying chronic affective disorders in outpatients: Validation of the General Behavior Inventory". Journal of Consulting and Clinical Psychology. 57 (1): 106–111. doi:10.1037/0022-006X.57.1.106.
  15. ^ Youngstrom, EA; Findling, RL; Danielson, CK; Calabrese, JR (June 2001). "Discriminative validity of parent report of hypomanic and depressive symptoms on the General Behavior Inventory". Psychological Assessment. 13 (2): 267–76. doi:10.1037/1040-3590.13.2.267. PMID 11433802.
  16. ^ Findling, RL; Youngstrom, EA; McNamara, NK; Stansbrey, RJ; Wynbrandt, JL; Adegbite, C; Rowles, BM; Demeter, CA; Frazier, TW; Calabrese, JR (January 2012). "Double-blind, randomized, placebo-controlled long-term maintenance study of aripiprazole in children with bipolar disorder". The Journal of Clinical Psychiatry. 73 (1): 57–63. doi:10.4088/JCP.11m07104. PMID 22152402.
  17. ^ a b Youngstrom, E; Zhao, J; Mankoski, R; Forbes, RA; Marcus, RM; Carson, W; McQuade, R; Findling, RL (March 2013). "Clinical significance of treatment effects with aripiprazole versus placebo in a study of manic or mixed episodes associated with pediatric bipolar I disorder". Journal of Child and Adolescent Psychopharmacology. 23 (2): 72–9. doi:10.1089/cap.2012.0024. PMC 3696952. PMID 23480324.
  18. ^ Ong, ML; Youngstrom, EA; Chua, JJ; Halverson, TF; Horwitz, SM; Storfer-Isser, A; Frazier, TW; Fristad, MA; Arnold, LE; Phillips, ML; Birmaher, B; Kowatch, RA; Findling, RL; LAMS, Group (1 July 2016). "Comparing the CASI-4R and the PGBI-10 M for Differentiating Bipolar Spectrum Disorders from Other Outpatient Diagnoses in Youth". Journal of Abnormal Child Psychology. 45 (3): 611–623. doi:10.1007/s10802-016-0182-4. PMC 5685560. PMID 27364346. {{cite journal}}: |first14= has generic name (help)


CPSS stuff from students' sandboxes


Elizabeth Suddreth


The Child PTSD Symptom Scale has been translated and validated for use in several other populations, including Spanish-speaking, Turkish, Israeli, and Nepalese populations.[1][2][3][4][5] While the Spanish CPSS showed good internal consistency, it did not show sufficient construct validity in comparison to the English version.

In the Nepali version of the CPSS, researchers found that it demonstrated moderate to good validity in comparison to the English version of the scale. Specifically, there were two items that did not apply to Nepali children, namely avoidance of places/people and lack of interest.

Language Internal consistency Construct validity
Spanish Good; .88[1]
Turkish Good; .89[4]
Hebrew Good; .91[3]
Nepali .86[2]

Elizabeth -- you could go ahead and add the citations for the specific studies for the Dutch, Turkish, etc., and I will play with packaging later. You and Julia could also work to merge the content each of you found. -EAY

Julia Whitfield


Cite the paper with Sophie. [6]

Reliability


Reliability refers to whether the scores are reproducible. Not all of the different types of reliability apply to the way that the CAGE is typically used. Internal consistency (whether all of the items measure the same construct) is not usually reported in studies of the CAGE; nor is inter-rater reliability (which would measure how similar peoples' responses were if the interviews were repeated again, or different raters listened to the same interview).

Rubric for evaluating norms and reliability for the CAGE questionnaire*
Criterion Rating (adequate, good, excellent, too good*) Explanation with references
Norms Not applicable Normative data are not gathered for screening measures of this sort
Internal consistency Not reported A meta-analysis of 22 studies reported the median internal consistency was α = 0.74.[7]
Inter-rater reliability Not usually reported Inter-rater reliability studies examine whether people's responses are scored the same by different raters, or whether people disclose the same information to different interviewers. These may not have been done yet with the CAGE; however, other research has shown that interviewer characteristics can change people's tendencies to disclose information about sensitive or stigmatized behaviors, such as alcohol or drug use.[8][9]
Test-retest reliability (stability) Not usually reported Retest reliability studies help measure whether things behave more as a state or trait; they are rarely done with screening measures
Repeatability Not reported Repeatability studies would examine whether scores tend to shift over time; these are rarely done with screening tests

Validity


Validity describes the evidence that an assessment tool measures what it was supposed to measure. There are many different ways of checking validity. For screening measures such as the CAGE, diagnostic accuracy and discriminative validity are probably the most useful ways of looking at validity.

Evaluation of validity and utility for the CAGE questionnaire*
Criterion Rating (adequate, good, excellent, too good*) Explanation with references
Content validity Adequate Items are face valid; not clear that they comprehensively cover all aspects of problem drinking
Construct validity (e.g., predictive, concurrent, convergent, and discriminant validity) Good Multiple studies show screening and predictive value across a range of age groups and samples
Discriminative validity Excellent Studies not usually reporting AUCs, but combined sensitivity and specificity often excellent
Validity generalization Excellent Multiple studies show screening and predictive value across a range of age groups and samples
Treatment sensitivity Not applicable CAGE not intended for use as an outcome measure
Clinical utility Good Free (public domain), extensive research base, brief.

*Table from Youngstrom et al., extending Hunsley & Mash, 2008;[10] *indicates new construct or category
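
For readers unfamiliar with how the sensitivity and specificity mentioned in the table are obtained, here is a small illustrative sketch for a screener scored like the CAGE (totals 0-4, with a score of 2 or more often described as a positive screen); the data and the resulting values are hypothetical and are not taken from the cited studies.

```python
# Illustrative only: sensitivity and specificity of a screening cutoff on made-up
# CAGE-style totals (0-4). The >= 2 cutoff is used here purely as an example.
import numpy as np

rng = np.random.default_rng(4)
cases = rng.binomial(4, 0.70, size=100)      # hypothetical totals for people with the disorder
controls = rng.binomial(4, 0.15, size=300)   # hypothetical totals for people without it

cutoff = 2
sensitivity = (cases >= cutoff).mean()       # proportion of cases flagged by the screen
specificity = (controls < cutoff).mean()     # proportion of non-cases correctly not flagged
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```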

Rachel Peltzer


<<<Rachel -- this is a great start! First paragraph looks great for utility. Second paragraph may have parts that would fit in validity (above) with the details about accuracy. The underidentification part of paragraph 2 could be Utility, talking about why we need free tools to improve accuracy. Third paragraph could go into validity, too; I am thinking about ways of tweaking it for utility, too. Think of utility as a more "bottom line" -- do the risks (false positives, false negatives), benefits (it's much better than not using a rating scale), and costs (it's free!) make it clinically useful or not? >>>

Utility

The CPSS provides a symptom severity score by assessing PTSD symptoms in three clusters: reexperiencing, avoidance, and arousal (the three clusters defined by the DSM-IV). As a self-report measure, it requires "minimal clinician and administration time", making it a practical tool for use in schools, communities, and research settings.[11]

"Results suggest a large discrepancy between rates of probable PTSD identified through standardized assessment and during the emergency room psychiatric evaluation (28.6% vs. 2.2%). Upon discharge, those with probable PTSD were more than those without to be assigned a diagnosis of PTSD (45% vs. 7.1%), a comorbid diagnosis of major depressive disorder (30% vs .14.3%), to be prescribed an antidepressant medication (52.5% vs. 33.7%), and to be prescribed more medications. The underidentification of trauma exposure and PTSD has important implications for the care of adolescents given that accurate diagnosis is a prerequisite for providing effective care. Improved methods for identifying trauma-related problems in standard clinical practice are needed" [12]

The CPSS scale assesses avoidance and change of activities, which may not accurately reflect pathology. This could possibly result in higher PTSD prevalence estimates. In one study, the CPSS correctly classified 72.2% of children; nearly one quarter were misclassified as false positives, and 5.6% were misclassified as false negatives.[13]

First, the CPSS assesses symptoms in the three clusters of DSM–IV (reexperiencing, avoidance, and arousal) and thus provides a PTSD symptom severity score. Second, it includes a seven-item assessment of functional impairment. As with other self-report measures, the CPSS requires minimal clinician and administration time, and it therefore constitutes a practical screening tool in school, community, and research settings for groups of children exposed to trauma.

The Child PTSD Symptom Scale (CPSS) is a 26-item self-report measure that assesses PTSD diagnostic criteria and symptom severity in children ages 8 to 18. It includes 2 event items, 17 symptom items, and 7 functional impairment items. Symptom items are rated on a 4-point frequency scale (0 = "not at all" to 3 = "5 or more times a week").
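
A minimal scoring sketch based on the description above: the 17 symptom items (each rated 0-3) are summed for a total severity score, and can also be summed within the three DSM-IV clusters. The cluster item counts used here (5 reexperiencing, 7 avoidance, 5 arousal) follow the DSM-IV symptom groupings and are an assumption for illustration, not the published CPSS scoring key.

```python
# Illustrative only: summing 17 symptom ratings (0-3 each) into cluster scores and
# a total severity score. The item-to-cluster split below is a placeholder assumption.
from typing import Sequence

CLUSTERS = {"reexperiencing": range(0, 5), "avoidance": range(5, 12), "arousal": range(12, 17)}

def score_cpss(symptom_items: Sequence[int]) -> dict:
    """symptom_items: 17 ratings, each 0 ('not at all') to 3 ('5 or more times a week')."""
    assert len(symptom_items) == 17 and all(0 <= x <= 3 for x in symptom_items)
    scores = {name: sum(symptom_items[i] for i in idx) for name, idx in CLUSTERS.items()}
    scores["total severity"] = sum(symptom_items)   # possible range 0-51
    return scores

example = [2, 1, 0, 3, 1, 0, 0, 2, 1, 1, 0, 2, 3, 1, 0, 2, 1]   # hypothetical responses
print(score_cpss(example))
```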


Transclusion Practice



General Behavior Inventory Version Development

General Behavior Inventory (GBI)


The GBI was originally developed as a self-report instrument for college students and adults to describe their own history of mood symptoms. The original item set included clinical characteristics and associated features in addition to the diagnostic symptoms of manic and depressive states in the current versions of the Diagnostic and Statistical Manual (DSM) of the American Psychiatric Association. The first set of 69 items was increased to 73, with the final version having 73 mood items plus 6 additional questions to check the validity of responses (which do not figure in the scale scores). The self-report version of the GBI has been used in an extensive program of research, accruing evidence for many facets of validity. Because of its length and high reading level, there have also been many efforts to develop short forms of the GBI.

7 Up 7 Down Inventory (7U7D)


The 7 Up-7 Down (7U7D)[17] is a 14-item measure of manic and depressive tendencies that was carved from the full-length GBI. This version is designed to be applicable to both youths and adults, and to improve the separation between manic and depressive conditions. It was developed via factor analysis from nine separate samples pooled into two age groups, ensuring applicability for use in youths and adults.[17]

A sleep scale also has been carved from the GBI, using the seven items that ask about anything directly related to sleep.

Parent report on the GBI (P-GBI)


The P-GBI[18] is an adaptation of the GBI, consisting of 73 Likert-scale items rated from 0 ("Never or Hardly Ever") to 3 ("Very often or Almost Constantly"). It comprises two scales: depressive symptoms (46 items) and hypomanic/biphasic (mixed) symptoms (28 items).[19]
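
To make the scoring concrete, here is a minimal sketch of summing P-GBI responses into the two scales just described; the item-to-scale assignments below are placeholders for illustration only, since the published scoring key is not reproduced on this page.

```python
# Illustrative only: summing P-GBI item responses (0-3 each) into the two scales
# described above. The index sets are placeholders chosen only to match the stated
# item counts (46 depressive, 28 hypomanic/biphasic); the real assignment of items
# to scales comes from the published scoring key.
import numpy as np

DEPRESSIVE_ITEMS = list(range(0, 46))            # 46 placeholder positions
HYPOMANIC_BIPHASIC_ITEMS = list(range(45, 73))   # 28 placeholder positions

def score_pgbi(responses: np.ndarray) -> dict:
    """responses: 73 ratings, each 0 ('Never or Hardly Ever') to 3 ('Very often or Almost Constantly')."""
    assert responses.shape == (73,) and responses.min() >= 0 and responses.max() <= 3
    return {
        "depressive": int(responses[DEPRESSIVE_ITEMS].sum()),
        "hypomanic/biphasic": int(responses[HYPOMANIC_BIPHASIC_ITEMS].sum()),
    }

rng = np.random.default_rng(2)
print(score_pgbi(rng.integers(0, 4, size=73)))   # scale totals for one hypothetical respondent
```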

Parent short forms

Parent GBI-10-Item Mania Scale
Synonyms: PGBI-10M
LOINC: 62720-8

Again, because of the length of the full version, several short forms that may be more convenient to use in clinical work have been built and tested in multiple samples. These include a 10-item mania form, two alternate 10-item depression forms, and the seven-item sleep scale. All have performed as well as or better than the self-report version when completed by an adult familiar with the youth's behavior (typically a parent).

The PGBI-10M[19] is a brief (10-item) version of the P-GBI that has been validated for clinical use with patients presenting with a variety of different diagnoses, including frequent comorbid conditions. It is administered to parents to rate their children aged 5-17. The 10 items include symptoms such as elated mood, high energy, irritability, and rapid changes in mood and energy as indicators of potential juvenile bipolar disorder.[19] The PhenX Toolkit uses this instrument as its child protocol for Hypomania/Mania Symptoms.[20]

Teacher report on the GBI


One study had a large sample of teachers complete the GBI to describe the mood and behavior of youths aged 5 to 18 years. The results indicated that there were many items for which teachers did not have an opportunity to observe the behavior (such as the items asking about sleep), and others that teachers often chose to skip. Even after shortening the item list to those that teachers could report on, the validity results were modest even though the internal consistency reliability was high. The results suggested that it was challenging for teachers to tell the difference between hypomanic symptoms and symptoms attributable to attention-deficit/hyperactivity disorder, which is much more common in the classroom. The results aligned with findings from a large meta-analysis that teacher report had the lowest average validity across mania scales, compared to adolescent or parent report on the same scales.[21] Based on these results, current recommendations are to concentrate on parent and youth report, and not to use teacher report as a way of measuring hypomanic symptoms in youths.


References

  1. ^ a b Meyer, Rika M. L.; Gold, Jeffrey I.; Beas, Virginia N.; Young, Christina M.; Kassam-Adams, Nancy (2014-08-17). "Psychometric Evaluation of the Child PTSD Symptom Scale in Spanish and English". Child Psychiatry & Human Development. 46 (3): 438–444. doi:10.1007/s10578-014-0482-2. ISSN 0009-398X. PMC 4663982. PMID 25129027.
  2. ^ a b Kohrt, Brandon A.; Jordans, Mark JD; Tol, Wietse A.; Luitel, Nagendra P.; Maharjan, Sujen M.; Upadhaya, Nawaraj (2011-01-01). "Validation of cross-cultural child mental health and psychosocial research instruments: adapting the Depression Self-Rating Scale and Child PTSD Symptom Scale in Nepal". BMC Psychiatry. 11 (1): 127. doi:10.1186/1471-244X-11-127. ISSN 1471-244X. PMC 3162495. PMID 21816045.{{cite journal}}: CS1 maint: unflagged free DOI (link)
  3. ^ a b Rachamim, Lilach; Helpman, Liat; Foa, Edna B.; Aderka, Idan M.; Gilboa-Schechtman, Eva (2011-06-01). "Validation of the child posttraumatic symptom scale in a sample of treatment-seeking Israeli youth". Journal of Traumatic Stress. 24 (3): 356–360. doi:10.1002/jts.20639. ISSN 1573-6598. PMID 21567475.
  4. ^ a b Kadak, Muhammed Tayyib; Boysan, Murat; Ceylan, Nesrin; Çeri, Veysi (2014). "Psychometric properties of the Turkish version of the Child PTSD Symptom Scale". Comprehensive Psychiatry. 55 (6): 1435–1441. doi:10.1016/j.comppsych.2014.05.001. PMID 24928279.
  5. ^ Diehle, Julia; Roos, Carlijn de; Boer, Frits; Lindauer, Ramón J. L. (2013-05-02). "A cross-cultural validation of the Clinician Administered PTSD Scale for Children and Adolescents in a Dutch population". European Journal of Psychotraumatology. 4: 19896. doi:10.3402/ejpt.v4i0.19896. ISSN 2000-8066. PMC 3644059. PMID 23671763.
  6. ^ Youngstrom, Eric A.; Choukas-Bradley, Sophia; Calhoun, Casey D.; Jensen-Doss, Amanda (February 2015). "Clinical Guide to the Evidence-Based Assessment Approach to Diagnosis and Treatment". Cognitive and Behavioral Practice. 22 (1): 20–35. doi:10.1016/j.cbpra.2013.12.005.
  7. ^ Shields, Alan L.; Caruso, John C. (2004-04-01). "A Reliability Induction and Reliability Generalization Study of the Cage Questionnaire". Educational and Psychological Measurement. 64 (2): 254–270. doi:10.1177/0013164403261814. ISSN 0013-1644. S2CID 144398501.
  8. ^ Griensven, Frits van; Naorat, Sataphana; Kilmarx, Peter H.; Jeeyapant, Supaporn; Manopaiboon, Chomnad; Chaikummao, Supaporn; Jenkins, Richard A.; Uthaivoravit, Wat; Wasinrapee, Punneporn (2006-02-01). "Palmtop-assisted Self-Interviewing for the Collection of Sensitive Behavioral Data: Randomized Trial with Drug Use Urine Testing". American Journal of Epidemiology. 163 (3): 271–278. doi:10.1093/aje/kwj038. ISSN 0002-9262. PMID 16357109.
  9. ^ Gribble, James N.; Miller, Heather G.; Cooley, Philip C.; Catania, Joseph A.; Pollack, Lance; Turner, Charles F. (2000-01-01). "The Impact of T-ACASI Interviewing on Reported Drug Use among Men Who Have Sex with Men". Substance Use & Misuse. 35 (6–8): 869–890. doi:10.3109/10826080009148425. ISSN 1082-6084. PMID 10847215. S2CID 26141403.
  10. ^ Hunsley, John; Mash, Eric (2008). A Guide to Assessments that Work. New York, NY: Oxford Press. pp. 1–696. ISBN 978-0195310641.
  11. ^ Foa, Edna B.; Johnson, Kelly M.; Feeny, Norah C.; Treadwell, Kimberli R. H. (2001-08-01). "The Child PTSD Symptom Scale: A Preliminary Examination of its Psychometric Properties". Journal of Clinical Child & Adolescent Psychology. 30 (3): 376–384. doi:10.1207/S15374424JCCP3003_9. ISSN 1537-4416. PMID 11501254. S2CID 9334984.
  12. ^ Havens, Jennifer F.; Gudiño, Omar G.; Biggs, Emily A.; Diamond, Ursula N.; Weis, J. Rebecca; Cloitre, Marylene (2012-04-01). "Identification of trauma exposure and PTSD in adolescent psychiatric inpatients: An exploratory study". Journal of Traumatic Stress. 25 (2): 171–178. doi:10.1002/jts.21683. ISSN 1573-6598. PMC 3742006. PMID 22522731.
  13. ^ Kohrt, Brandon A.; Jordans, Mark JD; Tol, Wietse A.; Luitel, Nagendra P.; Maharjan, Sujen M.; Upadhaya, Nawaraj (2011-01-01). "Validation of cross-cultural child mental health and psychosocial research instruments: adapting the Depression Self-Rating Scale and Child PTSD Symptom Scale in Nepal". BMC Psychiatry. 11 (1): 127. doi:10.1186/1471-244X-11-127. ISSN 1471-244X. PMC 3162495. PMID 21816045.{{cite journal}}: CS1 maint: unflagged free DOI (link)
  14. ^ Griensven, Frits van; Naorat, Sataphana; Kilmarx, Peter H.; Jeeyapant, Supaporn; Manopaiboon, Chomnad; Chaikummao, Supaporn; Jenkins, Richard A.; Uthaivoravit, Wat; Wasinrapee, Punneporn (2006-02-01). "Palmtop-assisted Self-Interviewing for the Collection of Sensitive Behavioral Data: Randomized Trial with Drug Use Urine Testing". American Journal of Epidemiology. 163 (3): 271–278. doi:10.1093/aje/kwj038. ISSN 0002-9262. PMID 16357109.
  15. ^ Gribble, James N.; Miller, Heather G.; Cooley, Philip C.; Catania, Joseph A.; Pollack, Lance; Turner, Charles F. (2000-01-01). "The Impact of T-ACASI Interviewing on Reported Drug Use among Men Who Have Sex with Men". Substance Use & Misuse. 35 (6–8): 869–890. doi:10.3109/10826080009148425. ISSN 1082-6084. PMID 10847215. S2CID 26141403.
  16. ^ Hunsley, John; Mash, Eric (2008). A Guide to Assessments that Work. New York, NY: Oxford Press. pp. 1–696. ISBN 978-0195310641.
  17. ^ a b Youngstrom, Eric A.; Murray, Greg; Johnson, Sheri L.; Findling, Robert L. (2016-12-01). "The 7 Up 7 Down Inventory: A 14-item measure of manic and depressive tendencies carved from the General Behavior Inventory". Psychological Assessment. 25 (4): 1377–1383. doi:10.1037/a0033975. ISSN 1040-3590. PMC 3970320. PMID 23914960.
  18. ^ Youngstrom, Eric A.; Findling, Robert L.; Danielson, Carla Kmett; Calabrese, Joseph R. (June 2001). "Discriminative validity of parent report of hypomanic and depressive symptoms on the General Behavior Inventory". Psychological Assessment. 13 (2): 267–276. doi:10.1037/1040-3590.13.2.267. PMID 11433802.
  19. ^ a b c Youngstrom, Eric A.; Frazier, Thomas W.; Demeter, Christine; Calabrese, Joseph R.; Findling, Robert L. (May 2008). "Developing a Ten Item Mania Scale from the Parent General Behavior Inventory for Children and Adolescents". Journal of Clinical Psychiatry. 69 (5): 831–9. doi:10.4088/jcp.v69n0517. PMC 2777983. PMID 18452343.
  20. ^ "Protocol Overview: Hypomania/Mania Symptoms - Child". PhenX Toolkit, Ver 19.0. RTI International. 17 January 2017.
  21. ^ Cite error: The named reference Youngstrom_et_al-2015 was invoked but never defined (see the help page).


The Society of Clinical Child and Adolescent Psychology (SCCAP) is an academic society for legal and forensic psychologists, as well as general psychologists who are interested in the application of psychology to the law. AP–LS serves as Division 41 of the American Psychological Association and publishes the academic journal Law and Human Behavior.

Goals and Publications


The Society of Clinical Child and Adolescent Psychology has three main goals**** The AP-LS publishes the journal Law and Human Behavior and a newsletter entitled AP-LS News.[1]

History


The American Psychology–Law Society (AP-LS) was developed at a San Francisco meeting in September 1968 by founders Eric Dreikurs and Jay Ziskin. The society was created for forensic and clinical psychologists. The first newsletter was published in October 1968. The original constitution was published later that year and outlined the reasons for creating the society: to promote the study of law, to influence legislation and policy, and to promote psychology in legal processes. A year after the San Francisco meeting, the AP-LS had 101 members. Most of the members were clinical psychologists, and nine of the original members were women. This group had a stronger focus on psychology than the Law and Society Association, which has similar goals but a broader focus.[2]

There was a controversy in 1971, when co-founder Jay Ziskin wrote a book stating that psychological evidence often did not meet reasonable criteria and should not be used in a court of law. This statement sparked debate within the society and caused its popularity to decline for a time. After this, June Louin Tapp became president of the society.[3]

In 1976, Bruce Sales became the society's president and helped refocus the society on the field of psychology and law. Sales had the goal of making the American Psychology–Law Society the driving force behind the field. Sales, along with Ronald Roesch, helped the group publish many works, including Psychology in the Legal Process, Perspectives in Law and Psychology, and the journal Law and Human Behavior.[2]

In the 1980s, Florence Kaslow asked the group to help develop a certification for forensic psychologists, but the group was not interested. This led Kaslow to create the American Board of Forensic Psychology, which helped keep the American Psychology–Law Society and forensic psychology separate. In the 1980s, Division 41 of the APA began to discuss law and psychology, covering many of the same issues as the AP-LS. Therefore, in 1983, Division 41 and the AP-LS merged, under the agreement that Law and Human Behavior would be the journal for the group and that the biennial meetings would continue to be held. The "new AP-LS" allowed previous presidents to serve a second term in the society, including Bruce Sales, who was the first president of the merged society.[4]

Specialty Guidelines


In 1991, the Committee on Ethical Guidelines for Forensic Psychologists began working to establish rules for forensic psychologists to follow in the courtroom. In 1992, the committee released "Specialty Guidelines" for forensic psychologists, in addition to the Code of Conduct that they were already required to follow.[2] Also in the 1990s, the society established the Committee on Careers and Education to help students find training programs to become psychologists in the legal system. In 1995, it held a conference to discuss education at the undergraduate and postdoctoral levels, how to offer legal psychology courses in the curriculum, and how to offer students practical experiences.[4] The AP-LS also provides grants and funding for students who are interested in attending school for law-related psychology.[1]

The Specialty Guidelines for Forensic Psychologists were first published in 1991. The guidelines are intended to encourage professional, high-quality, and systematic work by forensic psychologists in the legal system and in the service of those whom they serve. These are the only sets of APA-approved guidelines for a specific area of practice. The guidelines cover 11 points: responsibilities; competence; diligence; relationships; fees; informed consent, notification, and assent; conflicts in practice; privacy, confidentiality, and privilege; methods and procedures; assessment; and professional and other public communications.

After an extensive revision process, the Specialty Guidelines for Forensic Psychology were updated in 2013.

The Specialty Guidelines may be found on the APA’s website.[5]

Membership


The AP-LS is composed of APA members, graduate and undergraduate students, and people in related fields. The members primarily have an interest in psychology and law issues. Many members are also members of the American Psychological Association, though it is not a requirement. Members gain access to the publications of Law and Human Behavior and the American Psychology-Law newsletter.[1]

Awards and Honors


The AP-LS offers grants and other aid for undergraduate students, graduate students, early-career professionals, and research. In addition to grants, many awards are given out yearly.[1]

Publications

  • Law and Human Behavior: This journal is published six times a year. It discusses issues that arise in law and psychology, including the legal process, the legal system, and the relationships between these and human behavior.
  • AP-LS News: This is a newsletter that the society publishes three times each year. These newsletters provide updates on society activities, important ongoing legal cases, new publications, and emerging topics in the field of psychology and the law.[1]

References

  1. ^ a b c d e "American Psychology-Law Society". Retrieved 3 December 2014.
  2. ^ a b c Grisso, Thomas (3 Nov 1991). "A Developmental History of the American Psychology-Law Society". Law and Human Behavior. 15 (3): 213–231. doi:10.1007/BF01061710. S2CID 145608111.
  3. ^ Roesch, Ronald; Hart, Stephen; Ogloff, James. Psychology and Law: The state of the Discipline. pp. 423–424.
  4. ^ a b Wrightsman, Lawrence (2000). Encyclopedia of Psychology. New York, NY: American Psychological Association. pp. 493–498.
  5. ^ "Speciality Guidlines for Forensic Psychology". Retrieved 16 December 2014.


Category:Psychology-related professional organizations Category:Law-related professional associations Category:Law-related learned societies Category:Forensics organizations Category:Forensic psychology Category:Education Category:Teaching