tag:blogger.com,1999:blog-46072923001389797992018-05-29T00:04:28.676-07:00Multidimensional ScalingMDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.comBlogger11125tag:blogger.com,1999:blog-4607292300138979799.post-13945405894383605732009-04-23T14:46:00.000-07:002009-04-23T23:58:33.390-07:00Dominance approach vs. ideal-point approach in item selection<a href="http://www.ohiolink.edu/etd/send-pdf.cgi/Broadfoot%20Alison%20Ann.pdf?acc_num=bgsu1211913274">Dominance approach</a> (Coombs, 1964; Likert, 1932)<div><ul><li><span class="Apple-style-span" style="font-weight: bold;">It is about measuring people's ability</span></li><li>It uses items of high internal consistency.</li><li>Therefore, if a person scores low on one item, he/she should score low on the total score as well. Likewise, if I score higher on the item than you do, my ability would be <span class="Apple-style-span" style="font-style: italic;">dominant over</span> your ability.</li><li>In IRT terminology, <a href="http://io.psych.uiuc.edu/irt/dif_main.asp">DIF</a> (Differential Item Functioning) refers to "a difference in the probability of endorsing an item for members of a reference group (e.g., US workers) and a focal group (e.g., Chinese workers) having the same standing on the latent attribute measured by a test." It is related to the dominance approach.</li></ul><div>Ideal-point approach (Thurstone, 1928)</div><div><ul><li><span class="Apple-style-span" style="font-weight: bold;">It is about measuring people's attitude</span></li><li>Individuals will endorse an item to the degree that it reflects their own standing on the attitude continuum.</li><li>More neutral items should be included.</li></ul></div></div>MDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.com41tag:blogger.com,1999:blog-4607292300138979799.post-79927233721897409352009-04-21T14:27:00.000-07:002009-04-23T14:37:27.373-07:00Metric MDS and softwareMetric MDS includes the following (Borg & Groenen, 2005, p.
203):<div><ul><li>ratio MDS:</li><ul><li>(disparities) = b * (proximities in terms of dissimilarities; short for 'prox' below)</li></ul><br /><li>interval MDS:</li><ul><li>(disparities) = a + b * (prox)</li></ul><br /><li>logarithmic MDS:</li><ul><li>(disparities) = log(prox)<br /></li><li>(disparities) = b * log(prox)<br /></li><li>(disparities) = a + b * log(prox)</li></ul><br /><li>exponential MDS:</li><ul><li>(disparities) = exp(prox)</li><li>(disparities) = b * exp(prox)<br /></li><li>(disparities) = a + b * exp(prox)</li></ul><br /><li>power MDS (which includes square root with q = 0.5):</li><ul><li>(disparities) = (prox)^q</li><li>(disparities) = b * (prox)^q<br /></li><li>(disparities) = a + b * (prox)^q</li></ul><br /><li>polynomial MDS (i.e., spline MDS without interior knots):</li><ul><li>(disparities) = a + b * (prox) + c * (prox)^2</li><li>(disparities) = a + b * (prox) + c * (prox)^2 + d * (prox)^3</li></ul></ul></div>However, software packages are not always clear about the kind of metric MDS they perform. Based on my own testing as of 04/21/09, here is a table of comparison:<div><br /><table border="1"><thead><tr><td>Software Package</td><td>Program, version, date</td><td>Metric MDS supported</td></tr></thead><tbody><tr><td>MATLAB 7.8.0.347 (R2009a)</td><td>mdscale() 1.1.6.9, 12/01/08<br />Criterion = 'metricstress'</td><td>Ratio only</td></tr><tr><td>smacof in R 0.9-0 (05/24/08) </td><td>smacofSym(), metric = TRUE </td><td>Ratio only</td></tr><tr><td>SPSS 17.0.0 (08/23/08)</td><td>Proxscal version 1.0</td><td>Ratio, Interval, Spline</td></tr><tr><td>SYSTAT 12.02.00</td><td>Multidimensional Scaling<br />Shape = Square (similarities model)</td><td>Interval (Linear), Log, Power</td></tr></tbody></table><br />To date, no program in any of these software packages provides combinations of two or more transformations, but such combinations could be very helpful.
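The transformation families listed above are easy to express directly. As a minimal sketch (assuming numpy is available; the proximity values and the constants a and b are made up for illustration), here is how disparities would be computed under several of the metric MDS models:

```python
import numpy as np

# Hypothetical vector of proximities (dissimilarities) among object pairs.
prox = np.array([1.0, 2.0, 4.0, 3.0, 5.0, 6.0])

# Illustrative intercept and slope; in a real metric MDS fit, these
# are estimated together with the configuration, not fixed in advance.
a, b = 0.5, 2.0

ratio_disp = b * prox                  # ratio MDS
interval_disp = a + b * prox           # interval MDS
log_disp = a + b * np.log(prox)        # logarithmic MDS
exp_disp = a + b * np.exp(prox)        # exponential MDS
sqrt_disp = a + b * prox ** 0.5        # power MDS with q = 0.5 (square root)

print(ratio_disp)  # [ 2.  4.  8.  6. 10. 12.]
```

The same pattern extends to the polynomial family by adding c and d terms; the point is only that each family is a different parametric map from proximities to disparities.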
For example, log + polynomial may be of interest, because the log may be used to normalize residuals, while the polynomial may be able to pick up the trend of the data. That is,<br /><ul><li>(disparities) = a + b * log(prox) + c * log(prox)^2</li><li>(disparities) = a + b * log(prox) + c * log(prox)^2 + d * log(prox)^3</li></ul></div>MDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.com0tag:blogger.com,1999:blog-4607292300138979799.post-83448195772694329832009-04-18T06:02:00.000-07:002009-04-18T08:15:43.959-07:00Eigendecomposition and Singular Value DecompositionAn eigenvalue and its eigenvector are those satisfying the following eigenequation:<br /><br />matrix(transformation) * eigenvector = eigenvalue * eigenvector<br /><br />Thus, if we can find such an eigenvector and therefore an eigenvalue, the interpretation is: after being linearly transformed by the matrix, the eigenvector still points in the same direction. The <span style="font-weight: bold; font-style: italic;">eigenvalue</span> can thus be considered an essential part of the matrix, or <span style="font-weight: bold; font-style: italic;">the characteristic value of the matrix</span>.
The eigenvector can be considered a tool for extracting this essential part of the matrix.<br /><br />A nice explanation can be found <a href="http://www.math.tku.edu.tw/%7Echinmei/Ulinear/PDF/6-1.pdf">here</a>; see also Borg and Groenen (2005) Chapter 7.<br /><br />Eigendecomposition: matrix <span style="font-weight: bold;">A = QΛQ'</span><br />Thus, <span style="font-weight: bold;">AQ = QΛQ'Q = QΛ</span>, where <span style="font-weight: bold;">Λ</span> is a diagonal matrix of eigenvalues and <span style="font-weight: bold;">Q</span> is orthonormal (so <span style="font-weight: bold;">Q'Q = I</span>)<hr />Singular Value Decomposition: matrix <span style="font-weight: bold;">A = PΦQ'</span><br /><br /><span style="font-weight: bold;">P</span> is a matrix of left singular vectors, <span style="font-weight: bold;">Φ</span> is a diagonal matrix with singular values, <span style="font-weight: bold;">Q</span> is a matrix of right singular vectors.
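Both decompositions are easy to verify numerically. A minimal sketch (assuming numpy is available; the matrix A is made up for illustration, and is symmetric so that the eigendecomposition applies with an orthonormal Q):

```python
import numpy as np

# A small symmetric example matrix (made up for illustration).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Eigendecomposition: columns of Q are eigenvectors, lam the eigenvalues.
lam, Q = np.linalg.eigh(A)

# The eigenequation A q = lambda q holds column by column,
# and A is reconstructed as Q diag(lam) Q'.
assert np.allclose(A @ Q, Q * lam)
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)

# Singular value decomposition: A = P diag(phi) Q'.
P, phi, Qt = np.linalg.svd(A)
assert np.allclose(P @ np.diag(phi) @ Qt, A)
```

For a symmetric matrix, eigh() returns an orthonormal Q, so A = QΛQ' holds exactly; for non-symmetric or rectangular matrices, the SVD is the more general tool.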
The naming choice of "singular" is probably analogous to that of "eigen": the two decompositions have very similar forms, and both names refer to the essential, characteristic quality of the matrix.MDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.com0tag:blogger.com,1999:blog-4607292300138979799.post-50638161743140735652009-03-05T21:22:00.001-08:002009-03-09T09:36:07.809-07:00Unfolding Models<table><tbody><tr><td><img style="width: 309px; height: 227px;" src="http://lh5.ggpht.com/__iOg7Mclnjs/SbSp8DagY4I/AAAAAAAAAEw/IDXaIcU1-dU/unfolding.jpg" /></td><td style="vertical-align: text-top;">On the top of a folded handkerchief is the <i style="font-weight: bold;">ideal point</i>, representing the highest degree of preference for a particular individual, i.e., the optimal choice within a given set of items. The closer an item is to the ideal point, the higher this individual's preference for it; thus, the individual prefers choice 1 to choice 2.<br /><br />While different individuals have different ideal points on the handkerchief, <a href="http://preprints.stat.ucla.edu/384/unfolding.pdf"><i><b>unfolding</b></i></a> the handkerchief will give us a 2D diagram showing all ideal points and all the items in a common space.</td></tr></tbody></table><p>Some applications of unfolding models (adapted from <a href="http://www.ticalc.org/cgi-bin/zipview?83/basic/math/mds.zip;User%27s%20Guide.doc">this</a>):</p><p>Application 1: In <span style="font-style: italic;">American Idol</span>, a set of judges rate a set of contestants. Unfolding would display the ideal point of each judge as a point, and each contestant as a point.
Three pieces of information will be revealed: (a) Judges with similar ideal points would cluster; (b) Contestants rated similarly would cluster; (c) The closeness between a judge's ideal point and a contestant indicates how high the judge would rate the contestant.<br /></p><p>Application 2: A set of TV brands (e.g., Panasonic, Sony, ...) were rated on a set of attributes (e.g., price, quality, style, ...). In the matrix, the rows are the brands and the columns are the attributes. Unfolding would display (the ideal point of) each brand as a point and each attribute as a point. Three pieces of information would be revealed: (a) Similar brands (in terms of ideal points) would cluster; (b) Similar attributes would cluster; (c) Brands rated highly on a particular attribute would appear close to that attribute.</p><p>Application 3: Unfolding can also be used to display relationships that may not be symmetric, such as desire between people, trade-flows between nations, and journal citation frequency. Each journal would appear as both a row and a column. The matrix would contain the citation frequency of the row-journal by the column-journal. Self-citing is excluded. Unfolding would produce a diagram in which each journal would appear as two points: citing others and being cited by others. Clusters would have the obvious interpretation, and the distance between a journal’s two points would reflect the imbalance in its citations.</p><hr /><p>Other variants of unfolding models:</p><ol><li>External unfolding models. Besides the preference data, we also have a pre-existing coordinate matrix of the choice objects.</li><li>Vector model of unfolding. Representing individuals by preference vectors instead of ideal points.
Because it is the direction of the vector that matters, the preference vectors are usually scaled to have equal length.<br /></li><li>Weighted unfolding.<br /></li></ol><hr />Some terms and programs:<ol><li>In marketing, the unfolding model is known as <a href="http://en.wikipedia.org/wiki/Perceptual_mapping">perceptual mapping</a>.</li><li>In marketing, <a href="http://marketing.byu.edu/htmlpages/pcmds/pcmds.htm">MDPREF</a> ("<span style="font-weight: bold;">M</span>ulti<span style="font-weight: bold;">D</span>imensional <span style="font-weight: bold;">PREF</span>erence") performs internal unfolding analysis, whereas <a href="http://marketing.byu.edu/htmlpages/pcmds/pcmds.htm">PREFMAP</a> ("<span style="font-weight: bold;">PREF</span>erence <span style="font-weight: bold;">MAP</span>ping") performs external unfolding analysis.<br /></li></ol>MDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.com0tag:blogger.com,1999:blog-4607292300138979799.post-70604785351607965362009-03-03T09:36:00.002-08:002009-03-07T23:27:22.173-08:00Procrustes analysisThe purpose of Procrustes analysis is to fit one MDS solution (configuration, map), B, to another one, A, and <span style="font-style: italic;">eliminate superficial differences</span> between B and A, by means of rotating, mirror-reflecting, dilating/magnifying, shrinking, or shifting/moving B, without changing either's shape.<br /><br />Application 1. A is the physical location map, whereas B is the travel-time map produced by MDS. In Procrustes analysis, we fit B to A, which allows us to display B on top of A and to spot differences.<br /><br />Application 2. Y is easy to interpret, whereas the initial X is not. In Procrustes analysis, we fit X to Y in order to interpret X.<br /><br />Application 3. F is the result from the female participants, whereas M is that from the male participants.
In Procrustes analysis, we fit M to F (or F to M) so that we can compare the results from males and females on the same page (provided that the fitting is satisfactory).<br /><br />Application 4. CH is the result from Chinese participants, whereas AM is that from American participants. In Procrustes analysis, we fit CH to AM (or AM to CH) so that we can compare the cross-cultural results on the same page (provided that the fitting is satisfactory).MDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.com1tag:blogger.com,1999:blog-4607292300138979799.post-37013111066927589342009-01-29T07:59:00.000-08:002009-03-27T23:26:30.312-07:00MDS and social psychologySearching JPSP via Google Scholar. The 12 results found are categorized as follows:<br /><br /><span style="font-weight: bold;">A. Structure of Emotion</span><br /><br />1. Russell (1980) A circumplex model of affect: 28 emotion-denoting adjectives are reduced to a 2D space: pleasure-displeasure and arousal-sleepiness.<br /><ul><li>In the same year, Russell and Pratt (1980) also applied the two dimensions to the meaning that persons attribute to environments.<br /></li><li>Russell and Bullock (1985) followed up on Russell (1980) to show that the two dimensions reveal a basic property of the human conception of emotions, rather than represent an artifact that is due to semantic relations learned along with the emotion lexicon.</li><li>Russell, Weiss, and Mendelsohn (1989) followed up to develop a single-item scale, the Affect Grid, to quickly assess affect along the dimensions of pleasure-displeasure and arousal-sleepiness.<br /></li><li><a href="http://www.bc.edu/sites/asi/publications/lfb/feldman1995.pdf">Feldman (1995)</a> interpreted the 2D as valence-focus and arousal-focus and suggested their relation to Positive Affect and Negative Affect.</li><li><a href="http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1351136">Barrett (2004)</a> followed up on Feldman (1995) to talk about how
valence-focus and arousal-focus are related to the cognitive structure of emotion language vs. phenomenological experience.</li><li>Extending Russell's model, <a href="http://psych.colorado.edu/%7Ejedi/JEDI/Publications_files/larsen.mcgraw.cacioppo.%282001%29.pdf">Larsen, McGraw, and Cacioppo (2001)</a> argued that people can feel happy and sad at the same time; they do not have to experience positive-negative emotions in a bipolar way.<br /></li></ul><span style="font-weight: bold;">B. Structure of Self-Other Relationship:</span><br /><br />2. Falbo (1977) Multidimensional scaling of power strategies: 16 strategies of "How I Get My Way" are reduced to a 2D space: (a) rational/nonrational and (b) direct/indirect.<br /><br />3. <a href="http://www.sfu.ca/psyc/faculty/bartholomew/research/publications/bh1991.pdf">Bartholomew and Horowitz (1991)</a> examined a model of individual differences in adult attachment in which two underlying dimensions, the person's internal model of the self (positive or negative) and the person's internal model of others (positive or negative), were used to define four attachment patterns. (as seen in the General Discussion)<br /><br />4. Wiggins, Phillips, and Trapnell (1989) interpersonal circumplex: dominant/submissive and agreeable/cold-hearted.<br /><ul><li>Gurtman (1992) applied this to plot individuals' profiles of high/low trust and high/low Machiavellianism.</li></ul>5. Walker and Hennig (2004) studied the underlying 2D space for the three exemplars of morality: just, brave, and caring, and found a different 2D structure for each of them.<br /><br />6. <a href="http://psycnet.apa.org/index.cfm?fa=main.doiLanding&uid=2007-15390-004">Abele and Wojciszke (2007)</a> found that a large number of trait names can be organized into the 2D space of agency and communion.<br /><br />7. <a href="http://www.psych.rochester.edu/SDT/documents/2005_Grouzetetal_StructureofGoalContents.pdf">Grouzet et al.
(2005)</a> found that 11 types of goals can be organized into a 2D space of intrinsic (e.g., self-acceptance, affiliation) versus extrinsic (e.g., financial success, image), and self-transcendent (e.g., spirituality) versus physical (e.g., hedonism). These results have cross-cultural validity.MDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.com0tag:blogger.com,1999:blog-4607292300138979799.post-31082201209744284622009-01-28T14:03:00.000-08:002009-04-21T15:21:59.305-07:00(Incomplete) list of MDS researchers<ol><li>Warren S. Torgerson:</li><br /><ul><li><a href="http://www.jhu.edu/~gazette/1999/mar0899/08obit.html">former professor</a> at Johns Hopkins</li><li>developed MDS while he was a PhD student</li><li>known for <a href="http://en.wikipedia.org/wiki/Multidimensional_scaling">classical scaling</a> (a.k.a. Torgerson scaling) in MDS</li><li>The solution from Torgerson scaling can be used as an initial configuration; however, it is a rational configuration and is prone to local minima</li></ul><br /><li><a href="http://en.wikipedia.org/wiki/Louis_Guttman">Louis E. Guttman</a>:</li><br /><ul><li>former president of the Psychometric Society</li><li>developed the Guttman loss function (available in SYSTAT)</li></ul><br /><li>Roger N. Shepard:</li><br /><ul><li>former president of the Psychometric Society<br /></li><li><a href="https://www.stanford.edu/dept/psychology/rshepard">professor</a> of cognitive psychology at Stanford University (Emeritus) </li><li>known for the <a href="http://repositories.cdlib.org/cgi/viewcontent.cgi?article=1036&context=uclastat">Shepard diagram</a><br /></li></ul><br /><li><a href="http://en.wikipedia.org/wiki/Joseph_Kruskal">Joseph B. Kruskal</a>:</li><br /><ul><li>former president of the Psychometric Society</li><li>former president of the Classification Society of North America</li><li>developed stress formula 1 and formula 2<br /></li><li>developed the KYST program (Kruskal, Young, & Seery, 1973)</li></ul><br /><li>Forrest W.
Young:</li><br /><ul><li>former president of the Psychometric Society</li><li><a href="http://forrest.psych.unc.edu/">professor</a> of quantitative psychology at the University of North Carolina at Chapel Hill (Emeritus)</li><li>developer of <a href="http://forrest.psych.unc.edu/research/alscal.html">ALSCAL</a> (alternating least squares scaling) (available in SPSS)<br /></li></ul><br /><li>J. Douglas Carroll:</li><br /><ul><li>former president of the Psychometric Society</li><li><a href="http://dcarroll.rutgers.edu/">professor</a> of management and psychology at Rutgers University</li><li>developer of INDSCAL (individual differences scaling)</li></ul><br /><li><a href="http://directory.stat.ucla.edu/~deleeuw">Jan de Leeuw</a>:</li><br /><ul><li>former president of the Psychometric Society</li><li>developer of the smacof package in R</li></ul><br /><li><a href="http://www.psych.uiuc.edu/people/showprofile.php?id=41">Lawrence J. Hubert</a>:</li><br /><ul><li>former president of the Psychometric Society</li><li>developer of combinatorial data analysis methods</li><li>developer of dynamic programming approaches to scaling</li><li>developer of city-block MDS</li></ul><br /><li>Ingwer Borg and Patrick J. F. Groenen:</li><br /><ul><li>authors of the "bible" of MDS: Borg, I., & Groenen, P. J. F. (2005). <a href="http://people.few.eur.nl/groenen/mmds/"><span style="font-style: italic;">Modern multidimensional scaling</span></a>. 2nd edition.
New York: Springer.<br /></li></ul><br /></ol>MDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.com0tag:blogger.com,1999:blog-4607292300138979799.post-18722988652211222302009-01-28T13:59:00.000-08:002009-04-21T15:37:00.328-07:00Software<ol><li><a href="http://www.r-project.org/">R</a>:</li><ul><li>Package: <i>stats</i></li><ul><li><a href="http://stat.ethz.ch/R-manual/R-patched/library/stats/html/cmdscale.html">cmdscale()</a>: classical (metric) MDS (an example can be found <a href="http://cran.r-project.org/web/packages/HSAUR/vignettes/Ch_multidimensional_scaling.pdf">here</a>)<br /></li></ul><br /><li>Package: <i>proxy</i></li><ul><li>dist(): distance matrix</li></ul><br /><li>Package: <a href="http://stat.ethz.ch/R-manual/R-patched/library/MASS/html/00Index.html"><i>MASS</i></a> (Modern Applied Statistics in S)</li><ul><li><a href="http://stat.ethz.ch/R-manual/R-patched/library/MASS/html/isoMDS.html">isoMDS()</a>: Kruskal's non-metric MDS (an example can be found <a href="http://cran.r-project.org/web/packages/HSAUR/vignettes/Ch_multidimensional_scaling.pdf">here</a>)</li><li>Shepard(): for drawing the Shepard diagram<br /></li><li><a href="http://stat.ethz.ch/R-manual/R-patched/library/MASS/html/sammon.html">sammon()</a>: Sammon's non-linear mapping (a metric MDS method similar in spirit to Kruskal's approach but developed independently)<br /></li></ul><br /><li>Package: <i>smacof (Scaling by <a href="http://en.wikipedia.org/wiki/Majorization">MAjorizing</a> a COmplicated Function; </i>a paper is <a href="http://preprints.stat.ucla.edu/537/smacof.pdf">here</a>)</li><ul><li>smacofSym(): for symmetric dissimilarity matrices</li><li>smacofRect(): for rectangular input matrices, i.e., unfolding<br /></li><li>smacofIndDiff(): individual differences MDS</li><li>smacofSphere.primal(): projection of the resulting configurations onto spheres</li><li>smacofSphere.dual(): indirect function to solve linear problems, sometimes faster than primal</li><li>sim2diss(): convert similarity
matrix to dissimilarity matrix<br /></li></ul><br /><li>Package: <i>labdsv</i> (Laboratory for Dynamic Synthetic Vegephenomenology)<i><br /></i></li><ul><li>nmds(): application of isoMDS()</li></ul><br /><li>Package: <i>vegan</i> (R functions for vegetation ecologists)<i><br /></i></li><ul><li>metaMDS(): an integration of initMDS(), isoMDS(), postMDS(), and wascores()</li><li>procrustes(): for the Procrustes Problem</li><li>wcmdscale(): weighted classical (metric) multidimensional scaling<br /></li></ul><br /><li>Package: <i>rggobi</i></li><ul><li>ggobi(): interactive multidimensional scaling using ggobi and ggvis for display<br /></li></ul><br /><li>Useful Links</li><ul><li><a href="http://cran.stat.ucla.edu/">Task View of the Comprehensive R Archive Network</a></li><li>Notes on the use of R for psychology experiments and questionnaires: <a href="http://www.psych.upenn.edu/~baron/rpsych/rpsych.html">here</a> via <a href="http://cran.r-project.org/other-docs.html">here</a><br /></li><li><a href="http://www.r-project.org/search.html">R site search</a><br /></li></ul></ul><br /><li>SYSTAT:</li><ul><li>uses EM to estimate missing data in the nonmetric unfolding model</li><li>power transformation (metric MDS)</li><li>log transformation (metric MDS)</li></ul><br /><li><a href="http://www.ucs.louisiana.edu/~rbh8900/permap.html">PERMAP</a>: a highly entertaining, interactive tool to explore perceptual mapping<br /></li><br /><li>SPSS: proxscal, prefscal, alscal</li><br /><li>MATLAB: mdscale()</li></ol>A more complete list of MDS software can be found <a href="http://people.few.eur.nl/groenen/mmds/mds_software/index.html">here</a>.MDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.com0tag:blogger.com,1999:blog-4607292300138979799.post-54181099561616606512009-01-27T00:53:00.000-08:002009-04-21T14:24:29.828-07:00Internal and external analysesTo facilitate the interpretation of the dimensions in the reduced space, we may do internal or external
analyses.<br /><br />In internal analysis, we use the same proximities data, run an alternative analysis method (e.g., <span class="Apple-style-span" style="font-weight: bold;">cluster analysis</span>) with them, and embed the results within MDS. If different methods all converge on the same interpretation, the interpretation is well supported!<br /><br />In external analysis ("property fitting"), we use supplementary data. Specifically, we may try to predict the property (collected on the objects) for object_i from the 2D coordinates for the objects through multiple regression.<br /><br />For example, in a study, the objects are 14 stressful experiences relevant to early parenting, and the two dimensions are labeled as "major vs. minor child problems" and "child welfare vs. self-welfare". The external property is "infuriating", and we want to predict "infuriating" for each of the 14 objects from the 2D coordinates for the 14 objects, which results in a directed line. It is found that infuriating tends to be associated with the problems of self-welfare as opposed to the welfare of the child.<br /><br />In external analysis, we regress a given external attribute of the objects (e.g., "infuriating") on the 2D coordinates of the objects (i.e., dim 1 and dim 2), and the resulting <span style="font-style: italic;">un</span>standardized multiple regression coefficients form a point in the 2D space. A <span style="font-style: italic;">directed</span> line is then drawn from the origin to that point. Evidently, the projections of the objects on this line give a set of 2D coordinates, (dim1, dim2), which correspond best to the external attribute (Borg & Groenen, 2005, pp. 77-79).MDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.com0tag:blogger.com,1999:blog-4607292300138979799.post-86295203253747697202009-01-26T20:17:00.001-08:002009-02-03T21:17:51.496-08:00The scaling: Basic conceptsThe goal of scaling is to minimize the discrepancy between the data in the original space and the distances in the reduced space.
Specifically,<br /><br />p_ij is the <span style="font-weight: bold;">proximity</span> (typically, dissimilarity) between object_i and object_j in the original space, whereas d_ij is the Euclidean <span style="font-weight: bold;">distance</span> between object_i and object_j in the reduced space.<br /><br />We use a linear regression equation to predict d_ij from p_ij, and dhat_ij is the predicted value of d_ij. Then, we want to minimize the difference between d_ij and dhat_ij, using least squares. Here, we have the raw stress index (which we want to minimize):<br /><br /><img style="width: 286px; height: 62px;" src="http://lh6.ggpht.com/__iOg7Mclnjs/SX7Jm4OY8fI/AAAAAAAAAEM/0qmGm7Ovl9w/Clipboard04.jpg" /><br /><br />Because the dimensions in the reduced space can be arbitrarily stretched or contracted, we normalize the raw stress index in order to achieve the following,<br /><br /><img style="width: 128px; height: 64px;" src="http://lh3.ggpht.com/__iOg7Mclnjs/SX7BRExUaPI/AAAAAAAAADg/DLpgBoYoPf4/Clipboard02.jpg" /><br /><br />Also, a square root places the index in the same unit as d_ij, so we have the <span style="font-weight: bold;">normalized stress index</span> (which we want to minimize):<br /><br /><img style="width: 197px; height: 89px;" src="http://lh3.ggpht.com/__iOg7Mclnjs/SX7DVIvUUZI/AAAAAAAAADs/kBI-IzJCfBE/Clipboar33.jpg" /> (Note: this is Kruskal's stress formula 1.)<br /><br /><hr /><br />Typically, a <a href="http://www.udel.edu/FREC/eggermont/Courses/Stat603/MonoRegr.pdf">monotone regression</a> (a.k.a. isotonic regression) is used instead of a linear regression; because it preserves only the rank order of the proximities, it leads to non-metric MDS.
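Given a configuration and the disparities dhat_ij from the regression step, the normalized stress index above (Kruskal's stress formula 1) is straightforward to compute. A minimal sketch (assuming numpy; the configuration and disparities are made up for illustration):

```python
import numpy as np

# Made-up 2D configuration for 4 objects (corners of a unit square).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# Pairwise Euclidean distances d_ij in the reduced space (upper triangle).
i, j = np.triu_indices(len(X), k=1)
d = np.sqrt(((X[i] - X[j]) ** 2).sum(axis=1))

# The disparities dhat_ij would come from the regression step;
# here we simply perturb d to create an illustrative vector.
dhat = d + 0.1

# Kruskal's stress formula 1: sqrt( sum (d - dhat)^2 / sum d^2 ).
stress1 = np.sqrt(((d - dhat) ** 2).sum() / (d ** 2).sum())
print(round(stress1, 4))  # → 0.0866
```

In an actual MDS run, this quantity is minimized iteratively over the configuration X (and the disparities), rather than evaluated once.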
If a linear regression is used instead, it is metric MDS.<br /><br />According to Kruskal and Wish (1978), with non-metric MDS, at least 9 objects are required for a 2D solution, while at least 13 objects are required for a 3D solution.<br /><br /><hr /><br /><a href="http://en.wikipedia.org/wiki/Degeneracy_%28mathematics%29">Degenerate</a> solution:<br /><br />According to the Merriam-Webster dictionary, <a href="http://www.merriam-webster.com/dictionary/degenerate%5B1%5D">degenerate</a> means "being mathematically simpler (as by having a factor or constant equal to zero) than the typical case".<br /><br />In MDS, a degenerate solution is one with a zero (or very close to zero) stress value but retaining no (or minimal) structural information about the data. For example, the objects cluster into a few (e.g., 2) nodes and the dimensions are uninterpretable.<br />MDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.com0tag:blogger.com,1999:blog-4607292300138979799.post-44929475561497677062009-01-23T07:12:00.000-08:002009-06-10T21:46:50.767-07:00Why do we need MDS?Initially, researchers want to interpret a set of objects in terms of their relationships. However, the proximities (typically, dissimilarities) among them are in a high-dimensional space, <u>which is beyond humans' capacity for comprehension</u>. Being troubled, the researchers think,<br /><br /><span style="font-weight: bold;">Heck! Why don't we try to project the objects into a 2D space and display them on an X-Y plane?
As human beings, we are much more familiar with an X-Y plane, and such an interpretation will be more exciting!<br /><br /></span>Thus, <a href="http://en.wikipedia.org/wiki/Dimension_reduction">dimension reduction</a> and therefore information loss is involved in MDS, and the general purpose of an MDS program is to preserve the proximities between objects in the high-dimensional space as much as possible. An example of MDS in social psychology is that the 11 factors of the Aspiration Index are <a href="http://faculty.knox.edu/tkasser/aspirations.html">visually represented</a> in a 2D plane. (And don't you like it more when you are familiar with the way of interpreting the results?!)<br /><br />Some notes:<br /><br />1. MDS is a visualization tool. The goal is to <a href="http://www.analytictech.com/borgatti/mds.htm">reduce the observed complexity</a> in the data matrix to lower dimensions (2 or 3) for humans to visualize.<br /><br />2. MDS is a descriptive tool, rather than an inferential tool (<span class="m"><a href="http://preprints.stat.ucla.edu/274/274.pdf">de Leeuw, 2001</a>)</span>. However, a representative sample should be recruited in order to generalize the description to the population.<br /><br />3. MDS is more <a href="http://www.statsoft.com/textbook/stmulsca.html">flexible</a> than factor analysis: (a) it doesn't require that the underlying data are distributed as multivariate normal, and (b) it can be applied to any kind of distances or similarities, rather than just the computed correlation matrix.<br /><br />4. MDS is different from cluster analysis. The goal of MDS is not to group/partition objects, but users can still <a href="http://www.cs.vu.nl/%7Egeurt/Multivar/College5.pdf">visually cluster</a> objects based on MDS.<br /><br />5. MDS is related to the <a href="http://blog.peltarion.com/2007/06/13/the-self-organized-gene-part-2/">self-organizing map (SOM)</a> because they both enable visualizing low-dimensional views of high-dimensional data.
However, SOM <a href="http://www.cs.ualberta.ca/%7Enray1/CMPUT466_551/Clustering/Ch14-Part5.ppt">preserves data neighborhoods</a>, whereas MDS does not.<br /><br />6. Besides dimensional representation (more exploratory), another goal of MDS is configural verification (more confirmatory).<br /><br />7. The labeling of a dimension in MDS is arbitrary. The only requirement is that the two ends sum to zero at the center. It is similar to, but not the same as, a bipolar scale, because it doesn't say anything about mutual exclusivity of the two ends in reality.<br /><br />8. The number of dimensions is usually 2 (at best 3). On the one hand, the number should not be just 1; otherwise, all gradient-based methods in one dimension will typically result in local optima. On the other hand, the number should not exceed 3; otherwise, visualization could be very difficult.<br /><br />9. Another <a href="http://www.ticalc.org/cgi-bin/zipview?83/basic/math/mds.zip;User%27s%20Guide.doc">example</a> of MDS would be to visualize the travel times between cities. In the matrix, each row and each column would correspond to a city. MDS could then recreate a map containing the cities, solely from the matrix. This map would look similar to the actual map of city locations, but would differ in interesting ways. Cities connected by faster-than-average transportation passageways would appear closer together, while roadblocks would move cities apart.MDShttp://www.blogger.com/profile/00018838553015900104noreply@blogger.com0
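The travel-time example above can be sketched with classical (Torgerson) scaling: double-center the squared distance matrix and take the top eigenvectors. A minimal numpy illustration with a made-up four-city matrix (here an exact 2D Euclidean distance matrix, so the map is recovered exactly, up to rotation and reflection):

```python
import numpy as np

# Hypothetical symmetric "travel-time" matrix among four cities
# (these happen to be the distances of a 3-by-4 rectangle's corners).
D = np.array([[0.0, 3.0, 4.0, 5.0],
              [3.0, 0.0, 5.0, 4.0],
              [4.0, 5.0, 0.0, 3.0],
              [5.0, 4.0, 3.0, 0.0]])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
B = -0.5 * J @ (D ** 2) @ J           # double-centered matrix

lam, Q = np.linalg.eigh(B)            # eigendecomposition of B
order = np.argsort(lam)[::-1]         # largest eigenvalues first
lam, Q = lam[order], Q[:, order]

# 2D map: first two eigenvectors scaled by sqrt(eigenvalue)
# (clipping at zero guards against tiny negative eigenvalues).
X = Q[:, :2] * np.sqrt(np.maximum(lam[:2], 0.0))

# Interpoint distances of the recovered map match the input matrix.
i, j = np.triu_indices(n, k=1)
recovered = np.sqrt(((X[i] - X[j]) ** 2).sum(axis=1))
assert np.allclose(recovered, D[i, j])
```

With real travel times, the matrix would not be exactly Euclidean, so the recovered map would only approximate the input distances, and the remaining distortions are precisely the "interesting ways" in which it differs from the geographic map.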