Novel Audio Features for Music Emotion Recognition

Abstract— This work advances the music emotion recognition state-of-the-art by proposing novel emotionally-relevant audio features. We reviewed the existing audio features implemented in well-known frameworks and their relationships with the eight commonly defined musical concepts. This knowledge helped uncover musical concepts lacking computational extractors, for which we propose algorithms, namely related to musical texture and expressive techniques. To evaluate our work, we created a public dataset of 900 audio clips, with subjective annotations following Russell's emotion quadrants. The existing audio features (baseline) and the proposed features (novel) were tested using 20 repetitions of 10-fold cross-validation. Adding the proposed features improved the F1-score to 76.4 percent (by 9 percent), when compared to a similar number of baseline-only features. Moreover, analysing the feature relevance and results uncovered interesting relations, namely the weight of specific features and musical concepts to each emotion quadrant, and pointed to promising new directions for future research in the fields of music emotion recognition, interactive media, and novel music interfaces.

Research results show a glass ceiling in MER system performance [7].
Several factors contribute to this glass ceiling of MER systems. To begin with, our perception of emotion is inherently subjective: different people may perceive different, even opposite, emotions when listening to the same song. Even when there is agreement between listeners, there is often ambiguity in the terms used regarding emotion description and classification [10]. It is not well understood how and why some musical elements elicit specific emotional responses in listeners [10].
Second, creating robust algorithms to accurately capture these music-emotion relations is a complex problem, involving, among others, tasks such as tempo and melody estimation, which still have much room for improvement.
Third, as opposed to other information retrieval problems, there are no public, widely accepted and adequately validated benchmarks to compare works. Typically, researchers use private datasets (e.g., [11]) or provide only audio features (e.g., [12]). Even though the MIREX AMC task has contributed one dataset to alleviate this problem, several major issues have been identified in the literature. Namely, the defined taxonomy lacks support from music psychology and some of the clusters show semantic and acoustic overlap [2].
Finally, and most importantly, many of the audio features applied in MER were created for other audio recognition applications and often lack emotional relevance.
Hence, our main working hypothesis is that, to further advance the audio MER field, research needs to focus on what we believe is its main, crucial, and current problem: capturing the emotional content conveyed in music through better designed audio features. This raises the core question we aim to tackle in this paper: which features are important to capture the emotional content in a song? Our efforts to answer this question focus on perceived emotion, as opposed to transmitted emotion, which represents the emotion that the performer or composer aimed to convey. As mentioned, we focus this work on perceived emotion.
Regarding the relations between emotions and specific musical attributes, several studies uncovered interesting associations. As an example: major modes are frequently related to emotional states such as happiness or solemnity, whereas minor modes are often associated with sadness or anger [20]; simple, consonant harmonies are usually happy, pleasant or relaxed. On the contrary, complex, dissonant harmonies relate to emotions such as excitement, tension or sadness, as they create instability in a musical motion [21].
Despite the identification of these relations, many of them are not fully understood, still requiring further musicological and psychological studies, while others are difficult to extract from audio signals. Nevertheless, several computational audio features have been proposed over the years. While the number of existing audio features is high, many were developed to solve other problems (e.g., Mel-frequency cepstral coefficients (MFCCs) for speech recognition) and may not be directly relevant to MER.
Nowadays, most proposed audio features are implemented and available in audio frameworks. In Table 2, we summarize several of the current state-of-the-art (hereafter termed standard) audio features, available in widely adopted frameworks, namely the MIR Toolbox [24], Marsyas [25] and PsySound3 [26].
Musical attributes are usually organized into four to eight different categories (depending on the author, e.g., [27], [28]), each representing a core concept. Here, we follow an eight-category organization, employing rhythm, dynamics, expressive techniques, melody, harmony, tone colour (related to timbre), musical texture and musical form. Through this organization, we are able to better understand: i) where features related to emotion belong; and ii) which categories may lack computational models to extract musical features relevant to emotion.
One of the conclusions obtained is that the majority of available features are related to tone colour (63.7 percent). Also, many of these features are abstract and very low-level, capturing statistics about the waveform or the spectrum.
These are not directly related to the higher-level musical concepts described earlier. As an example, MFCCs belong to tone colour but do not give explicit information about the source or material of the sound; nonetheless, they can implicitly help to distinguish these. This is an example of the mentioned semantic gap, where high-level concepts are not captured explicitly by the existing low-level features. This agrees with the conclusions presented in [8], [9], where, among other things, the influence of the existing audio features on MER was assessed. Results of previous experiments showed that "the used spectral features outperformed those based on rhythm, dynamics, and, to a lesser extent, harmony" [9]. This supports the idea that more adequate audio features related to some musical concepts are lacking. In addition, the number of implemented audio features per concept is highly disproportionate, with nearly 60 percent in the cited article belonging to timbre (spectral) [9]. In fact, very few features are mainly related to expressive techniques, musical texture (which has none) or musical form. Thus, there is a need for audio features estimating higher-level concepts, e.g., expressive techniques and ornamentations like vibratos, tremolos or staccatos (articulation), or texture information such as the number of musical layers.

To conclude, the majority of current computational MER works (e.g., [3], [10], [16]) share common limitations, such as: low to average results, especially regarding valence, due to the aforesaid lack of relevant features; lack of uniformity in the selected taxonomies and datasets, which makes it impossible to compare different approaches; and the usage of private datasets, unavailable to other researchers for benchmarking. Additional publicly available datasets exist, but most suffer from the same previously described problems, such as: i) the Million Song Dataset, which covers a high number of songs but provides only features, metadata and uncontrolled annotations (e.g., based on social media information such as Last.FM) [12]; ii) MoodSwings, which has a limited number of samples [29]; iii) Emotify, which is focused on induced rather than perceived emotions [30]; iv) MIREX, which employs unsupported taxonomies and contains overlaps between clusters [31]; v) DEAM, which is sizeable but shows low agreement between annotators, as well as issues such as noisy clips (e.g., claps, speech, silences) or clear variations in emotion in supposedly static excerpts [32]; and vi) other existing datasets, which still require manual verification of the gathered annotations or clip quality, such as [6].

Regarding emotion taxonomies, several distinct models have been proposed over the years, divided into two major groups: categorical and dimensional. It is often argued that dimensional paradigms lead to lower ambiguity, since instead of having a discrete set of emotion adjectives, emotions are regarded as a continuum [10]. A widely accepted dimensional model in MER is James Russell's circumplex model [13]. There, Russell affirms that each emotional state sprouts from two independent neurophysiologic systems.
The AllMusic API 2 served as the source of musical information, providing metadata such as artist, title, genre and emotion information, as well as 30-second audio clips for most songs. The steps for the construction of the dataset are described in the following paragraphs.
Step 1: AllMusic API querying. First, we queried the API for the top songs for each of the 289 distinct emotion tags in it. This resulted in 370611 song entries, of which 89 percent had an associated audio sample and 98 percent had genre tags, with 28646 distinct artist tags present. These 289 emotion tags used by AllMusic are not part of any known supported taxonomy, but are said to be "created and assigned to music works by professional editors" [33].
Step 2: Mapping of AllMusic tags into quadrants. Next, we used Warriner's adjectives list [34] to map the 289 AllMusic tags into Russell's AV quadrants. Warriner's list contains 13915 English words with affective ratings in terms of arousal, valence and dominance (AVD). It is an improvement over previous studies (e.g., the ANEW adjectives list [35]), with a better documented annotation process and a more comprehensive list of words. Intersecting the Warriner and AllMusic tags results in 200 common words, of which a higher number have positive valence (Q1: 49, Q2: 35, Q3: 33, Q4: 75).
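As an illustration of this mapping step, the sketch below assigns a Russell quadrant to a (valence, arousal) pair. It assumes Warriner-style ratings on a 1-9 scale and a threshold at the scale midpoint of 5.0 on both dimensions; the exact thresholding rule used in the paper is not detailed in this excerpt.

```python
# Sketch: mapping Warriner-style valence/arousal ratings to Russell's quadrants.
# Assumption (not stated in this excerpt): ratings follow Warriner's 1-9 scale and
# the neutral midpoint 5.0 is used as the threshold on both dimensions.

def av_to_quadrant(valence: float, arousal: float, midpoint: float = 5.0) -> str:
    """Return the Russell quadrant (Q1..Q4) for a (valence, arousal) pair."""
    if valence >= midpoint and arousal >= midpoint:
        return "Q1"  # positive valence, high arousal (e.g., happy, excited)
    if valence < midpoint and arousal >= midpoint:
        return "Q2"  # negative valence, high arousal (e.g., angry, tense)
    if valence < midpoint and arousal < midpoint:
        return "Q3"  # negative valence, low arousal (e.g., sad, depressed)
    return "Q4"      # positive valence, low arousal (e.g., calm, relaxed)

# Example: a tag with high valence and high arousal maps to Q1.
print(av_to_quadrant(valence=8.2, arousal=6.5))  # -> Q1
```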
Step 3: Processing and filtering. Then, the set of related metadata, audio clips and emotion tags with AVD values was processed and filtered. As mentioned above, the MIR Toolbox, Marsyas and PsySound offer a large number of computational audio features. In this work, we extract a total of 1702 features from those three frameworks. This high number of features also results from computing several statistical measures over time-series data.
Afterwards, a feature reduction stage was carried out to discard redundant features obtained by similar algorithms across the selected audio frameworks. This process consisted of removing features with correlation higher than 0.9; of each such pair, the feature with the lower weight according to the ReliefF [36] feature selection algorithm was discarded.
Moreover, features with zero standard deviation were also removed. As a result, the number of baseline features was reduced to 898. A similar feature reduction process was carried out with the novel features presented in the following subsection.
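A minimal sketch of this reduction stage is shown below. The ReliefF weights are assumed to be precomputed by any ReliefF implementation (as in [36]); the greedy best-to-worst pruning order is an assumption consistent with "features with lower weight were discarded", not necessarily the exact procedure used.

```python
import numpy as np

def reduce_features(X: np.ndarray, relief_weights: np.ndarray, corr_thresh: float = 0.9):
    """Drop zero-variance features, then prune correlated pairs (|r| > corr_thresh),
    keeping the feature with the higher ReliefF weight. Returns kept column indices."""
    keep = np.where(X.std(axis=0) > 0)[0]           # remove zero standard deviation features
    X, w = X[:, keep], relief_weights[keep]

    order = np.argsort(-w)                          # consider features from best to worst
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = []
    for i in order:
        if all(corr[i, j] <= corr_thresh for j in selected):
            selected.append(i)
    return keep[np.array(selected)]

# Usage sketch: relief_weights would come from a ReliefF implementation [36].
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
X[:, 5] = X[:, 3] * 0.99 + 0.01 * rng.normal(size=100)   # a highly correlated pair
weights = rng.uniform(-1, 1, size=20)
print(reduce_features(X, weights))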
These standard audio features serve to build baseline models against which new approaches, employing the novel audio features proposed in the next section, can be benchmarked. The proposed novel features are described in the following section.

Novel Audio Features
Many of the standard audio features are low-level, extracted directly from the audio waveform or the spectrum. However, we naturally rely on cues such as melodic lines, notes, intervals and scores to assess higher-level musical concepts such as harmony, melody, articulation or texture. The explicit determination of musical notes, frequency and intensity contours is an important mechanism to capture such information; therefore, we describe this preliminary step before presenting the actual features.

From the Audio Signal to MIDI Notes
Going from the audio waveform to a music score is still an unsolved problem, and automatic music transcription algorithms remain imperfect [37]. Still, we believe that estimates such as predominant melody lines, even if imperfect, give us relevant information that is currently unused in MER.
To this end, we built on previous works by Salamon et al. [38] and Dressler [39] to estimate predominant fundamental frequencies (f0) and saliences. Typically, the process starts by identifying which frequencies are present in the signal at each point in time (sinusoid extraction). Here, 46.44 msec (1024 samples) frames with a 5.8 msec (128 samples) hop size (hereafter denoted hop) were selected.
Next, harmonic summation is used to estimate the pitches in these instants and how salient they are (obtaining a pitch salience function). Given this, the series of consecutive pitches which are continuous in frequency are used to form pitch contours. These represent notes or phrases.
Finally, a set of computations is used to select the f0s that are part of the predominant melody [38]. The resulting pitch trajectories are then segmented into individual MIDI notes following the work by Paiva et al. [40].
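To make the f0-to-note step concrete, the sketch below converts an f0 trajectory to MIDI numbers and splits it into notes whenever the rounded MIDI value changes or a frame is unvoiced. This is a deliberately simplified stand-in for the segmentation of Paiva et al. [40]; the 22050 Hz sample rate, 128-sample hop and 60 ms minimum note duration are assumptions for illustration.

```python
import numpy as np

HOP = 128 / 22050.0  # 5.8 ms hop, as in the text (assuming a 22050 Hz sample rate)

def hz_to_midi(f0_hz: np.ndarray) -> np.ndarray:
    """MIDI note number for each voiced frame (NaN where f0 is 0/unvoiced)."""
    midi = np.full(f0_hz.shape, np.nan)
    voiced = f0_hz > 0
    midi[voiced] = 69 + 12 * np.log2(f0_hz[voiced] / 440.0)
    return midi

def segment_notes(f0_hz, saliences, min_dur=0.06):
    """Very simplified note segmentation: a new note starts whenever the rounded
    MIDI number changes or the frame is unvoiced. Not Paiva et al.'s algorithm [40]."""
    midi = hz_to_midi(np.asarray(f0_hz, dtype=float))
    rounded = np.where(np.isnan(midi), -1, np.round(midi)).astype(int)
    notes, start = [], 0
    for j in range(1, len(rounded) + 1):
        if j == len(rounded) or rounded[j] != rounded[start]:
            dur = (j - start) * HOP
            if rounded[start] >= 0 and dur >= min_dur:
                notes.append({
                    "f0s": midi[start:j],                # per-frame pitch sequence of the note
                    "MIDI": int(rounded[start]),         # overall MIDI_i
                    "sal": np.mean(saliences[start:j]),  # mean pitch salience
                    "st": start * HOP, "et": j * HOP, "nd": dur,
                })
            start = j
    return notes

# Example: a 440 Hz (A4) tone followed by 494 Hz (B4)
f0 = np.array([440.0] * 40 + [493.9] * 40)
print([(n["MIDI"], round(n["nd"], 2)) for n in segment_notes(f0, np.ones_like(f0))])
```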
Each of the N obtained notes, hereafter denoted as note_i, is characterized by: the respective sequence of f0s (a total of L_i frames), f0_{j,i}, j = 1, 2, ..., L_i; the corresponding MIDI note numbers (for each f0), midi_{j,i}; the overall MIDI note value (for the entire note), MIDI_i; the sequence of pitch saliences, sal_{j,i}; the note duration, nd_i (sec); the starting time, st_i (sec); and the ending time, et_i (sec). This information is exploited to model higher-level concepts such as vibrato, glissando, articulation and others, as follows.

3. http://mir.dei.uc.pt/resources/MER_audio_taffc_dataset.zip
In addition to the predominant melody, music is composed of several melodic lines produced by distinct sources.
Although less reliable, there are works approaching the estimation of multiple (also known as polyphonic) F0 contours from these constituent sources. We use Dressler's multi-F0 approach [39] to obtain a framewise sequence of fundamental frequency estimates.

Melodic Features
Melody is a key concept in music, defined as the horizontal succession of pitches. This set of features consists of metrics obtained from the notes of the melodic trajectory.

MIDI Note Number (MNN) statistics. Based on the MIDI note number of each note, MIDI_i (see Section 3.4.1), we compute 6 statistics: MIDImean, i.e., the average MIDI note number of all notes, MIDIstd (standard deviation), MIDIskew (skewness), MIDIkurt (kurtosis), MIDImax (maximum) and MIDImin (minimum).

Note Space Length (NSL) and Chroma NSL (CNSL). We also extract the total number of unique MIDI note values, NSL, used in the entire clip, based on MIDI_i. In addition, a similar metric, chroma NSL (CNSL), is computed, this time mapping all MIDI note numbers to a single octave (values 1 to 12).

Register Distribution. This class of features indicates how the notes of the predominant melody are distributed across different pitch ranges. Each instrument and voice type has a different range, which in many cases overlaps with others. In our implementation, 6 classes were selected, based on the vocal categories and ranges for non-classical singers [41]: Soprano (C4-C6), Mezzo-soprano (A3-A5), Contralto (F3-E5), Tenor (B2-A4), Baritone (G2-F4) and Bass (E2-E4). The resulting metrics are the percentage of MIDI note values in the melody, MIDI_i, that fall in each of these registers; for instance, for soprano, it comes as in (1).

Register Distribution per Second. In addition to the previous class of features, these are computed as the ratio of the sum of the durations of notes within a specific pitch range (e.g., soprano) to the total duration of all notes. The same 6 pitch range classes are used.

A further group of metrics is based on note durations, as in (5), where ND_i denotes the qualitative duration of note_i. Regarding the TLNR metric, a note is considered longer than the previous one if there is a difference of more than 10 percent in length (with a minimum of 20 msec), as in (6). Similar calculations apply to the TSNR and TELNR features.
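The sketch below computes the MNN statistics, NSL/CNSL and the register distributions from the note dictionaries produced by the earlier segmentation sketch. The register boundaries follow the ranges listed above; the dictionary key names (e.g., "RD_soprano") are illustrative, not the paper's identifiers.

```python
import numpy as np
from scipy import stats

# Register ranges from the text, as MIDI note numbers (C4 = 60, E2 = 40, etc.).
REGISTERS = {
    "soprano": (60, 84), "mezzo": (57, 81), "contralto": (53, 76),
    "tenor": (47, 69), "baritone": (43, 65), "bass": (40, 64),
}

def six_stats(values):
    """mean, std, skewness, kurtosis, max, min - the 6 statistics used throughout."""
    v = np.asarray(values, dtype=float)
    return {"mean": v.mean(), "std": v.std(), "skew": stats.skew(v),
            "kurt": stats.kurtosis(v), "max": v.max(), "min": v.min()}

def melodic_features(notes):
    midis = np.array([n["MIDI"] for n in notes])
    durs = np.array([n["nd"] for n in notes])
    feats = {f"MIDI{k}": v for k, v in six_stats(midis).items()}      # MNN statistics
    feats["NSL"] = len(set(midis))                                    # note space length
    feats["CNSL"] = len({m % 12 for m in midis})                      # chroma NSL
    for name, (lo, hi) in REGISTERS.items():
        in_reg = (midis >= lo) & (midis <= hi)
        feats[f"RD_{name}"] = in_reg.mean()                           # register distribution
        feats[f"RDS_{name}"] = durs[in_reg].sum() / durs.sum()        # duration-weighted version
    return feats
```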

Musical Texture Features
To the best of our knowledge, musical texture is the musical concept with the fewest directly related audio features available (Section 3). However, some studies have demonstrated that it can influence emotion in music, either directly or by interacting with other concepts such as tempo and mode [42]. We propose features related to the musical layers of a song.
Here, we use the sequence of multiple frequency estimates to measure the number of simultaneous layers in each frame of the entire audio signal, as described in Section 3.4.1.
Musical Layers (ML) statistics. As mentioned above, a number of multiple F0s are estimated for each frame of the song clip. Here, we define the number of layers in a frame as the number of multiple F0s obtained in that frame. Then, we compute the 6 usual statistics of the distribution of musical layers across frames, i.e., MLmean, MLstd, etc.

Musical Layers Distribution (MLD).
Here, the number of f0 estimates in a given frame is divided into four classes: i) no layers; ii) a single layer; iii) two simultaneous layers; and iv) three or more layers. The percentage of frames in each of these four classes is computed, measuring, as an example, the percentage of the song identified as having a single layer (MLD1). Similarly, we compute MLD0, MLD2 and MLD3.
Ratio of Musical Layers Transitions (RMLT). These features capture information about the changes from a specific musical layer sequence to another (e.g., ML1 to ML2). To this end, we use the number of different fundamental frequencies (f0s) in each frame, identifying consecutive frames with distinct values as transitions and normalizing the total count by the length of the audio segment (in seconds). Moreover, we also compute the length in seconds of the longest segment for each musical layer class.
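A minimal sketch of these texture features is given below, taking as input the per-frame count of multi-F0 estimates. It assumes the same 5.8 ms hop as before; the key names (e.g., "ML0_longest") are illustrative labels, not the paper's identifiers.

```python
import numpy as np
from scipy import stats

def texture_features(layers_per_frame, hop=128 / 22050.0):
    """Musical texture sketch: `layers_per_frame` holds the number of multi-F0
    estimates obtained for each frame (Section 3.4.1)."""
    ml = np.asarray(layers_per_frame)
    feats = {"MLmean": ml.mean(), "MLstd": ml.std(), "MLskew": stats.skew(ml),
             "MLkurt": stats.kurtosis(ml), "MLmax": ml.max(), "MLmin": ml.min()}

    # Musical Layers Distribution: percentage of frames with 0, 1, 2, and 3+ layers.
    clipped = np.clip(ml, 0, 3)
    for k in range(4):
        feats[f"MLD{k}"] = np.mean(clipped == k)

    # Ratio of Musical Layers Transitions: changes of layer class between consecutive
    # frames, normalized by the clip length in seconds.
    total_secs = len(ml) * hop
    feats["RMLT"] = np.count_nonzero(np.diff(clipped)) / total_secs

    # Longest contiguous segment (in seconds) spent in each layer class.
    for k in range(4):
        longest, run = 0, 0
        for v in clipped:
            run = run + 1 if v == k else 0
            longest = max(longest, run)
        feats[f"ML{k}_longest"] = longest * hop
    return feats

print(texture_features([0, 1, 1, 2, 2, 2, 3, 4, 1, 1]))
```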

Expressivity Features
Few of the standard audio features studied are primarily related to expressive techniques in music. However, techniques such as vibrato, tremolo and articulation are commonly used in music, and some works link them to emotions [43]-[45].
Articulation Features. Articulation is a technique affecting the transition or continuity between notes or sounds. To compute articulation features, we start by detecting legato (i.e., connected notes played "smoothly") and staccato (i.e., short and detached notes), as described in Algorithm 1.
Using this, we classify all the transitions between notes in the song clip and, from them, extract several metrics, such as the ratio of staccato, legato and other transitions, the longest sequence of each articulation type, etc.
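Since Algorithm 1 is not reproduced in this excerpt, the sketch below uses a plausible heuristic instead: legato when consecutive notes are nearly gapless, staccato when a note is short and clearly detached. The threshold values are assumptions, and the note dictionaries are those of the earlier segmentation sketch.

```python
def classify_transitions(notes, max_legato_gap=0.01, max_staccato_dur=0.2, min_gap=0.05):
    """Label each transition between consecutive notes as legato, staccato or other.
    Thresholds are illustrative assumptions, not those of Algorithm 1."""
    labels = []
    for cur, nxt in zip(notes, notes[1:]):
        gap = nxt["st"] - cur["et"]               # silence between the two notes
        if gap <= max_legato_gap:
            labels.append("legato")               # notes played "smoothly", connected
        elif cur["nd"] <= max_staccato_dur and gap >= min_gap:
            labels.append("staccato")             # short, detached notes
        else:
            labels.append("other")
    return labels

def articulation_features(notes):
    labels = classify_transitions(notes)
    n = max(len(labels), 1)
    feats = {f"ratio_{k}": labels.count(k) / n for k in ("legato", "staccato", "other")}
    for k in ("legato", "staccato"):
        longest = run = 0
        for lab in labels:
            run = run + 1 if lab == k else 0
            longest = max(longest, run)
        feats[f"longest_{k}_seq"] = longest       # longest sequence of each articulation type
    return feats
```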
Glissando Coverage (GC). For glissando coverage, we compute the global coverage, based on gc_i, using (9).
Glissando Direction (GDIR). This feature indicates the global direction of the glissandos in a song, as in (10).

Glissando to Non-Glissando Ratio (GNGR). This feature is defined as the ratio of the notes containing glissando to the total number of notes, as in (11).

Vibrato and Tremolo Features. Vibrato is an expressive technique used in vocal and instrumental music that consists in a regular oscillation of pitch. Its main characteristics are the amount of pitch variation (extent) and the velocity (rate) of this pitch variation. It varies according to different music styles and emotional expression [44].
Hence, we extract several vibrato features, such as vibrato presence, rate, coverage and extent. To this end, we apply a vibrato detection algorithm adapted from [46], as follows.

Algorithm 3. Vibrato Detection.
1.2. Look for a prominent peak, pp_{w,i}, in each analysis window, in the expected range for vibrato. In this work, we employ the typical range for vibrato in the human voice, i.e., [5, 8] Hz [46]. If a peak is detected, the corresponding window contains vibrato. Then, we define the following features.

Vibrato Presence (VP). A song clip contains vibrato if any of its notes has vibrato, similarly to (8).
Vibrato Rate (VR) statistics. Based on the vibrato rate of each note, vr_i (see Algorithm 3), we compute 6 statistics: VRmean, i.e., the weighted mean of the vibrato rate of each note, etc.
As with VR, we also compute the same 6 statistics for vibrato extent, based on ve_i, and for vibrato duration, based on vd_i (see Algorithm 3).
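To illustrate the detection step and the per-note quantities vr_i, ve_i and vd_i, the sketch below analyses a whole note at once instead of the sliding windows of Algorithm 3. The input is the per-note pitch sequence (the "f0s" array from the earlier sketch, in semitones); the prominence test (peak above twice the mean spectrum) and the peak-to-peak extent are assumptions, not the exact criteria of [46].

```python
import numpy as np

HOP = 128 / 22050.0  # frame hop in seconds, as in Section 3.4.1

def note_vibrato(f0_midi, fmin=5.0, fmax=8.0):
    """Detect vibrato in one note from its per-frame pitch sequence (semitones).
    Whole-note analysis instead of sliding windows - a simplification of Algorithm 3."""
    f0 = np.asarray(f0_midi, dtype=float)
    if len(f0) < 8:
        return None
    dev = f0 - f0.mean()                                   # pitch oscillation around the mean
    spectrum = np.abs(np.fft.rfft(dev * np.hanning(len(dev))))
    freqs = np.fft.rfftfreq(len(dev), d=HOP)
    band = (freqs >= fmin) & (freqs <= fmax)
    if not band.any() or spectrum[band].max() < 2 * spectrum.mean():
        return None                                         # no prominent peak in 5-8 Hz
    rate = freqs[band][np.argmax(spectrum[band])]           # vr_i: vibrato rate (Hz)
    extent = f0.max() - f0.min()                            # ve_i: extent (semitones, peak-to-peak)
    return {"vr": rate, "ve": extent, "vd": len(f0) * HOP}  # vd_i: vibrato duration (sec)

# Example: a note oscillating at ~6 Hz around A4 (MIDI 69)
t = np.arange(0, 0.6, HOP)
print(note_vibrato(69 + 0.5 * np.sin(2 * np.pi * 6 * t)))
```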

Vibrato Coverage (VC).
Here, we compute the global coverage, based on vc_i, in a similar way to (9).
High-Frequency Vibrato Coverage (HFVC). This feature measures vibrato coverage restricted to notes over C4 (261.6 Hz). This is the lower limit of the soprano's vocal range [41].
Vibrato to Non-Vibrato Ratio (VNVR). This feature is defined as the ratio of the notes containing vibrato to the total number of notes, similarly to (11).
Vibrato Notes Base Frequency (VNBF) statistics. As with the VR features, we compute the same 6 statistics for the base frequency (in cents) of all notes containing vibrato.
Discriminating valence is particularly difficult for songs with low arousal. In addition, some songs from these quadrants (Q3 and Q4) appear to share musical characteristics, which are related to contrasting emotional elements (e.g., a happy accompaniment or melody and a sad voice or lyric). This concurs with the conclusions presented in [54].
For the same number of features (100), the experiment using the novel features shows an improvement of 9 percent in F1-score when compared to the one using only baseline features. This increment is noticeable in all four quadrants, ranging from 5.7 percent in quadrant 2, where the baseline classifier performance was already high, to a maximum increment of 11.6 percent in quadrant 3, which was the worst performing using only baseline features. Overall, the novel features improved the classification generally, with a greater influence on songs from Q3.
Regarding the misclassified songs, analyzing the confusion matrix (see Table 5, averaged over the 20 repetitions of 10-fold cross-validation) shows that the classifier is slightly biased towards positive valence, predicting songs from quadrants 1 and 4 more frequently (466.3, especially Q1 with 246.35) than from quadrants 2 and 3 (433.7). Moreover, a significant number of songs were wrongly classified between quadrants 3 and 4, which may be related to the ambiguity described previously [54]. Based on this, further MER research needs to tackle valence in low-arousal songs, either by using new features to capture musical concepts currently ignored or by combining other sources of information such as lyrics.
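For reference, the evaluation protocol (20 repetitions of 10-fold cross-validation scored with F1) can be sketched with scikit-learn as below. The SVM with RBF kernel is an illustrative choice of classifier, since the model itself is not specified in this excerpt.

```python
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate(X, y, n_splits=10, n_repeats=20, seed=0):
    """20 repetitions of stratified 10-fold cross-validation, scored with macro F1.
    The SVM (RBF kernel) is an illustrative assumption, not necessarily the paper's model."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=seed)
    scores = cross_val_score(model, X, y, cv=cv, scoring="f1_macro")
    return scores.mean(), scores.std()

# Usage sketch: X holds the selected (baseline + novel) features, y the quadrant labels.
# mean_f1, std_f1 = evaluate(X, y)
```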

Feature Analysis
Fig. 2 presents the total number of standard and novel audio features extracted, organized by musical concept. As discussed, most are tone colour features, for the reasons pointed out previously.
As abovementioned, the best result (76.4 percent, Table 3) was obtained with 29 novel and 71 baseline features, which demonstrates the relevance of the novel features to MER.

Moreover, the importance of each audio feature was measured using ReliefF. Some of the novel features proposed in this work appear consistently in the top 10 features for each problem and many others are in the first 100, demonstrating their relevance to MER. There are also features that, while they may have a lower weight alone, are important to specific problems when combined with others.
In this section we discuss the best features to discriminate each specific quadrant from the others, according to specific feature rankings (e.g., ranking of features to separate Q1 songs from non-Q1 songs). The top 5 features to discriminate each quadrant are presented in Table 6.
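The per-quadrant ranking described here amounts to a one-vs-rest relabeling followed by ReliefF weighting, as sketched below. The `relieff_weights(X, y)` helper is a stand-in for any ReliefF implementation (e.g., [36]); it is not a specific library call.

```python
import numpy as np

def rank_features_per_quadrant(X, y, feature_names, relieff_weights, top_k=5):
    """For each quadrant, rank features by their ReliefF weight on the binary problem
    'this quadrant vs. the rest'. `relieff_weights(X, y)` is any ReliefF implementation
    returning one weight in [-1, 1] per feature."""
    rankings = {}
    for q in sorted(set(y)):
        y_bin = (np.asarray(y) == q).astype(int)      # e.g., Q1 songs vs. non-Q1 songs
        w = np.asarray(relieff_weights(X, y_bin))
        top = np.argsort(-w)[:top_k]                  # most predictive features first
        rankings[q] = [(feature_names[i], float(w[i])) for i in top]
    return rankings
```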
Except for quadrant 1, the top 5 features for each quadrant contain a majority of tone colour features, which are over-represented in comparison to the remaining concepts. It is also relevant to highlight the higher weight given by ReliefF to the top 5 features of both Q2 and Q4. This difference in weights explains why fewer features are needed to obtain 95 percent of the maximum score for both quadrants, when compared to Q1 and Q3.
Musical texture information, namely the number of musical layers and the transitions between different texture types (two of which were extracted from voice-only signals), was also very relevant for quadrant 1, together with several rhythmic features. However, the ReliefF weight of these features for Q1 is lower when compared with the top features of the other quadrants. Happy songs are usually energetic, associated with a "catchy" rhythm and high energy. The higher number of rhythmic features used, together with texture and tone colour (mostly energy metrics), supports this idea.
Interestingly, creaky voice detection extracted directly from the voice signal is also highlighted (it ranked 15th), a characteristic which has previously been associated with emotion [50].
The best features to discriminate Q2 are related to tone colour, such as: roughness, capturing the dissonance in the song; rolloff and MFCCs, measuring the amount of high-frequency content and total energy in the signal; and the spectral flatness measure, indicating how noise-like the sound is.
Other important features are tonal dissonance (dynamics) and expressive techniques such as vibrato. Empirically, it makes sense that characteristics like sensory dissonance, high energy and complexity are correlated with tense, aggressive music. Moreover, research supports the association of vibrato with negative energetic emotions such as anger [47].
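The tone colour descriptors mentioned above come from the Matlab/C++ frameworks cited in Section 3; a rough Python approximation using librosa is sketched below. The librosa descriptors are related but not identical to the framework implementations, and roughness has no direct librosa equivalent, so it is omitted here.

```python
import librosa

def tone_colour_features(path, sr=22050):
    """Rough librosa approximations of Q2-relevant tone colour descriptors:
    spectral rolloff, MFCCs and spectral flatness."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)    # amount of high-frequency energy
    flatness = librosa.feature.spectral_flatness(y=y)         # how noise-like the sound is
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbral envelope
    return {
        "rolloff_mean": float(rolloff.mean()), "rolloff_std": float(rolloff.std()),
        "flatness_mean": float(flatness.mean()),
        **{f"mfcc{i + 1}_mean": float(m) for i, m in enumerate(mfcc.mean(axis=1))},
    }

# Usage: feats = tone_colour_features("clip.mp3")
```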

We introduce the proposed novel audio features and describe the emotion classification experiments carried out. To assess this, and given the mentioned limitations of available datasets, we started by building a newer dataset that suits our purposes.

Dataset Acquisition

The currently available datasets have several issues, as discussed in Section 2. To avoid these pitfalls, the following objectives were pursued to build ours: 1) Use a simple taxonomy, supported by psychological studies; in fact, current MER research is still unable to properly solve simpler problems with high accuracy, and thus, in our opinion, there are few advantages to currently tackling problems with higher granularity, where a high number of emotion categories or continuous values are used; 2) Perform semi-automatic construction, reducing the resources needed to build a sizeable dataset; 3) Obtain a medium-to-high size dataset, containing hundreds of songs; 4) Create a public dataset prepared for further research works, thus providing emotion quadrants as well as genre, artist and emotion tags for multi-label classification.

As for tremolo, this is a trembling effect, somewhat similar to vibrato but regarding change of amplitude. A similar approach is used to calculate tremolo features. Here, the sequence of pitch saliences of each note is used instead of the f0 sequence, since tremolo represents a variation in intensity or amplitude of the note. Given the lack of scientifically supported data regarding tremolo, we used the same range employed for vibrato (i.e., 5-8 Hz).

Voice Analysis Toolbox (VAT) Features

Another approach, previously used in other contexts, was also tested: a voice analysis toolkit. Some researchers have studied emotion in the speaking and singing voice [47] and even studied the related acoustic features [48]. In fact, "using singing voices alone may be effective for separating the "calm" from the "sad" emotion, but this effectiveness is lost when the voices are mixed with accompanying music" and "source separation can effectively improve the performance" [9]. Hence, besides extracting features from the original audio signal, we also extracted the same features from the signal containing only the separated voice. To this end, we applied the singing voice separation approach proposed by Fan et al. [49] (although separating the singing voice from accompaniment in an audio signal is still an open problem). Moreover, we used the Voice Analysis Toolkit 5, a "set of Matlab code for carrying out glottal source and voice quality analysis", to extract features directly from the audio signal. The selected features are related to voiced and unvoiced sections and the detection of creaky voice, "a phonation type involving a low frequency and often highly irregular vocal fold vibration, [which] has the potential [...]".

To reduce the high number of features, ReliefF feature selection algorithms [36] were used to select the ones better suited to each classification problem. The output of the ReliefF algorithm is a weight between -1 and 1 for each attribute, with more positive weights indicating more predictive attributes.

In addition to the tone colour features related to the spectrum, the best 20 features for quadrant 3 also include the number of musical layers (texture), spectral dissonance, inharmonicity (harmony), and expressive techniques such as tremolo. Moreover, nine of the features used to obtain the maximum score are extracted directly from the voice-only signal. Of these, four are related to intensity and loudness variations (crescendos, decrescendos); two to melody (vocal ranges used); and three to expressive techniques such as vibrato and tremolo. Empirically, the characteristics of the singing voice seem to be a key aspect influencing emotion in songs from quadrants 3 and 4, where negative emotions (e.g., sad, depressed) usually have less smooth voices, with variations in loudness (dynamics), tremolos, vibratos and other techniques that confer a degree of sadness.

Regarding quadrant 4, most of the employed features were related to tone colour, while features capturing vibrato, texture, dynamics and harmony were also relevant, namely spectral metrics, the number of musical layers and its variations, and measures of spectral flatness (noise-likeness). More features are needed to better discriminate Q3 from Q4, which musically share some common characteristics, such as lower tempo, fewer musical layers, less energy, and the use of glissandos and other expressive techniques.

A visual representation of the best 30 features to distinguish each quadrant, grouped by categories, is presented in Fig. 3. As previously discussed, a higher number of tone colour features is used to discriminate each quadrant.

Fig. 3. Best 30 features to discriminate each quadrant, organized by musical concept. Novel (O) features are extracted from the original audio signal, while Novel (V) features are extracted from the voice-separated signal.

TABLE 1

TABLE 2
Summary of Standard Audio Features
For each pair of consecutive notes, note_i and note_{i+1}: ... As with GE, we also compute the same 6 statistics for glissando duration, based on gd_i, and for slope, based on gs_i (see Algorithm 2).

TABLE 3
Results of the Classification by Quadrants

TABLE 5
Confusion Matrix Using the Best Performing Model

TABLE 6
Top 5 Features for Each Quadrant Discrimination