The Nature and Nuance of Music

Philip Ball’s The Music Instinct explores the multifaceted nature of music, examining its scientific underpinnings and its profound impact on human experience. The book investigates how our brains process sound, perceive melody and harmony, and respond emotionally to music across diverse cultures and historical periods. Ball considers the universality of music, the evolution of musical scales and structures, and the ongoing debate about music’s meaning and purpose. Through explorations of acoustics, psychology, neuroscience, and cultural studies, the book seeks to understand why music is so integral to humanity.

The Science and Art of Music

Music is not simply a kind of mathematics but rather a remarkable blend of art and science, logic and emotion, physics and psychology. The study of how music works involves both scientific investigation and an appreciation for its artistic qualities.

Here are some aspects of the relationship between music and science discussed in the sources:

  • The Physics of Sound and Music: Musical notes can be understood in terms of their acoustic frequencies. The relationship between pitch and frequency seems simple, with higher frequencies generally corresponding to higher pitches. However, the selection of discrete notes used in music across cultures is not solely determined by nature. The interaction of nature and culture shapes the diverse palettes of notes found in different musical traditions. Helmholtz combined his knowledge of the ear’s workings with the mathematics of vibration to understand how we hear tones, producing a significant early scientific exposition on music cognition in his 1863 book “On the Sensations of Tone as a Physiological Basis for the Theory of Music”. He also explored the ancient question of consonance, noting the historical preference for intervals with simple frequency ratios.
  • The Neuroscience of Music: When we listen to music, our brains perform complex feats of filtering, ordering, and prediction automatically and unconsciously. Neuroscience seeks to identify which brain regions are used for different musical tasks, providing insights into how the brain classifies and interprets music. For example, pitch perception appears to be mostly localized in the right hemisphere. Pitch intervals and melody are processed in areas like Heschl’s gyrus and the planum temporale. The brain also engages in sophisticated streaming and binding of sound to distinguish different musical elements and create a coherent perception. Musical training can alter the brain, leading to more analytical processing in musicians and changes in the corpus callosum and auditory cortex. However, the precise link between the rich experience of music and brain activity remains a significant challenge for neuroscience. The “Mozart Effect,” which suggested a positive effect of listening to Mozart on general intellect, has been qualified by findings showing that children might respond best to their favorite kind of music, leading to the idea of a “Blur Effect” as well.
  • Music Cognition and Psychology: The science of music cognition is increasingly exploring the universal aspects of music by breaking it down into basic structural elements like pitch, tone, and rhythm. However, emotional, social, and cultural factors also significantly influence music perception. For instance, the perception of melodic pitch steps shows probability distributions that are fairly universal across Western and many other musical traditions. Music psychologists study how we process melodies, which involves learning expectations about pitch steps. They also investigate how we decode sound, including the streaming and binding of different musical voices. The field of music and emotion has become central to music cognition, moving away from purely atomistic dissections of music to examine responses to actual music. Theories like Meyer’s and Narmour’s attempt to explain emotional responses in terms of expectation, tension, and release.
  • Music as Organized Sound: Avant-garde composer Edgard Varèse defined his music as “organized sound”, distinguishing his experimental sonic explorations from conventional music. This definition highlights the role of organization in what we perceive as music, although the listener also actively participates in that organization.
  • Music and Language: Some researchers propose an evolutionary link between music and language, suggesting a common ancestral “musilanguage”. This theory posits that musilanguage might have contained features like lexical tone, combinatorial phrases, and expressive phrasing. Even today, non-vocal music seems to share speech-like patterns, such as pitch contours (prosody). Studies suggest that the rhythmic and melodic patterns of language may have shaped the music of composers from the same linguistic background. While there are neurological dissociations between language and music processing (amusia and aphasia), some theories suggest that syntactic processing in both domains might share neural resources.
  • The Meaning of Music: The question of whether music has inherent meaning is debated. Some believe music is purely formal and does not “say” anything; others argue that music can convey and elicit emotions, although the precise relationship is complex. Musical affect might arise from underlying principles that can be analyzed rationally, and composers and musicians intuitively exploit characteristics of human perception to create musical effects.

In conclusion, the study of music is deeply intertwined with various scientific disciplines. Acoustics provides the foundation for understanding musical sound, neuroscience explores the brain’s engagement with music, and music cognition investigates how we perceive and process musical information. While music is undoubtedly an art form, scientific inquiry continues to shed light on the intricate mechanisms underlying our musical experiences.

The Fundamentals of Musical Scales

Musical scales are fundamental to most musical traditions, serving as the set of pitches from which melodies and harmonies are constructed. They represent a selection of discrete pitches from the continuous spectrum of audible frequencies.

Here are key aspects of musical scales discussed in the sources:

  • Definition and Basic Concepts: A musical scale is a set of discrete pitches within the octave that a tradition uses to build its music. Unlike the smoothly varying pitch of a siren, a scale is like a staircase of frequencies. Most musical systems are based on the division of pitch space into octaves, a seemingly fundamental aspect of human pitch perception. Within this octave, different cultures choose a subset of potential notes to form their scales. This selection is not solely determined by nature but arises from an interaction of nature and culture.
  • Western Scales and Their Development:
  • Pythagorean Scales: One of the earliest theoretical frameworks for Western scales is attributed to Pythagoras, though the knowledge was likely older. Pythagorean scales are derived mathematically from the harmonious interval of a perfect fifth, based on the simple frequency ratio of 3:2. By repeatedly stepping up by a perfect fifth from a tonic and folding the resulting notes back into an octave, the major scale can be generated. This scale has an uneven pattern of whole tones and semitones. The Pythagorean system aimed to place music on a solid mathematical footing, suggesting music was a branch of mathematics embedded in nature. However, the cycle of fifths in Pythagorean tuning does not perfectly close, leading to an infinite number of potential notes, which can be problematic if music modulates between many keys.
  • Diatonic Scales: Western music inherited diatonic scales from the Greek tradition, characterized by seven notes per octave. The major and minor scales became the basis of most Western music from the late Renaissance to the early twentieth century. The notes of a diatonic scale follow a fixed order, with the tonic as the starting and central note.
  • Chromatic Scale: In addition to the seven diatonic notes, there are five other notes within an octave (the black keys on a piano, relative to the C major scale). The scale that includes all twelve semitones is called the chromatic scale, and music that uses notes outside the prevailing diatonic scale is described as chromatic.
  • Modes: Before diatonic scales became dominant, Western music utilized modes, which can be thought of as scales using the same notes but starting in different places, each with a different sequence of step heights. Medieval modes had anchoring notes called the final and often a reciting tone called the tenor. The Ionian and Aeolian modes introduced later are essentially the major and a modern minor scale, respectively.
  • Accidentals, Transposition, and Modulation: Sharps and flats (accidentals) were added to the modal system to preserve pitch steps when transposing melodies to different starting notes (keys). This also enabled modulation, the process of moving smoothly from one key to another, which became central to Western classical music. Transposition and modulation necessitate the introduction of new scales and notes.
  • Non-Western Scales: Musical scales vary significantly across cultures.
  • Javanese Gamelan: Gamelan music uses non-diatonic scales like pélog and sléndro, which have different interval structures compared to Western scales. The sléndro scale is a rare exception with equal pitch steps.
  • Indian Music: The Indian subcontinent has a rich musical tradition with non-diatonic scales that include perfect fifths. North Indian music employs thirty-two different scales (thats) of seven notes per octave, drawn from a palette of twenty-two possible pitches. The tunings of these scales, and of the ragas built on them, can differ significantly from Western tunings.
  • Arab-Persian Music: This tradition also uses pitch divisions smaller than a semitone, with estimates ranging from fifteen to twenty-four potential notes within an octave. However, some of these might function as embellishments rather than basic scale tones.
  • The existence of diverse scale systems demonstrates that the selection of notes is not solely dictated by acoustics or mathematics.
  • Number and Distribution of Notes: Most musical systems use melodies constructed from four to twelve distinct notes within an octave. This limitation likely stems from cognitive constraints: too few notes limit melodic complexity, while too many make it difficult for the brain to track and organize the distinctions. The unequal pitch steps found in most scales (with sléndro being an exception) are thought to provide reference points for listeners to perceive the tonal center or key of a piece. Scales with five (pentatonic) or seven (diatonic) notes are particularly widespread, possibly because they allow for simpler interconversion between scales with different tonic notes during modulation.
  • Cognitive Processing of Scales: Our brains possess a mental facility for categorizing pitches, allowing us to perceive melodies as coherent even on slightly mistuned instruments. We learn to assign pitches to a small set of categories based on interval sizes, forming mental “boxes”. To comprehend music, we need to discern a hierarchy of status between the notes of a scale, which depends on our ability to intuit the probabilities of different notes occurring.
  • Alternative Scales: Some twentieth-century composers explored non-standard scales to create unique sounds, such as Debussy’s whole-tone scale, Messiaen’s octatonic scales, and Scriabin’s “mystic” scales.
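
The Pythagorean construction described above can be sketched numerically. The Python fragment below is an illustrative sketch using exact ratios: it generates the scale degrees and exhibits the comma by which the cycle of fifths fails to close. One detail worth making explicit: one of the seven fifths must be taken below the tonic to yield the 4:3 fourth of the major scale.

```python
from fractions import Fraction

def fold(r):
    """Fold a frequency ratio back into a single octave (1 <= r < 2)."""
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

# One fifth below the tonic plus five fifths above it, folded into the
# octave, give the seven degrees of the Pythagorean major scale:
fifth = Fraction(3, 2)
scale = sorted(fold(fifth ** k) for k in range(-1, 6))
for r in scale:
    print(r)   # prints 1, 9/8, 81/64, 4/3, 3/2, 27/16, 243/128 (one per line)

# The cycle of fifths never quite closes: twelve fifths overshoot seven
# octaves by the Pythagorean comma, the source's "infinite number of
# potential notes" problem.
comma = fifth ** 12 / 2 ** 7
print(float(comma))   # ≈ 1.0136
```

The exact-fraction arithmetic makes the point unmissable: the comma is the rational number 531441/524288, not 1, so no finite stack of pure fifths ever lands exactly on an octave.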

In essence, musical scales are carefully chosen sets of pitches that provide the foundational elements for musical expression. Their structure and the specific notes they contain vary greatly across historical periods and cultural traditions, reflecting both acoustic principles and human cognitive and cultural preferences.

The Perception of Melody in Music

Melody perception is a complex cognitive process through which we hear a sequence of musical notes as a unified and meaningful whole, often referred to as a “tune”. However, the sources clarify that “melody” is a more versatile term than “tune,” as not all music has a readily identifiable tune like “Singin’ in the Rain”. For instance, Bach’s fugues feature short, overlapping melodic fragments rather than a continuous, extended tune.

Pitch and Pitch Relationships:

The foundation of melody perception lies in our ability to process pitch, handled by pitch-selective neurons in the primary auditory cortex. While pitch rises with acoustic frequency, our auditory system also creates a cyclical perception in which pitches an octave apart sound alike, a phenomenon called octave equivalence with no clear counterpart in our other senses. However, the sources emphasize that having the correct pitch classes does not by itself guarantee melody recognition: when listeners were presented with familiar tunes in which the octave of each note was randomized, they could not recognize the melody at all. This suggests that register or ‘height’ (which octave a note sits in) is a crucial dimension of melody perception, alongside chroma (the pitch class).
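
The chroma/height distinction can be made concrete with a logarithm: the integer part of log₂(f/f_ref) gives the octave (height) and the fractional part gives the chroma. A minimal Python sketch (the middle-C reference value is an illustrative assumption):

```python
import math

def height_and_chroma(freq, ref=261.63):
    """Split a pitch into 'height' (which octave it occupies) and
    'chroma' (its position within the octave), relative to a reference
    frequency, here middle C as an illustrative choice."""
    octaves = math.log2(freq / ref)
    height = math.floor(octaves)
    chroma = octaves - height          # fractional part, 0 <= chroma < 1
    return height, chroma

# A4 (440 Hz) and A5 (880 Hz) share a chroma but differ in height:
# octave equivalence captured in two numbers.
h1, c1 = height_and_chroma(440.0)
h2, c2 = height_and_chroma(880.0)
print(h2 - h1)              # 1: one octave apart
print(abs(c1 - c2) < 1e-9)  # True: same pitch class
```

The octave-scrambling experiment described above amounts to preserving every note's chroma while randomizing its height; the code shows how cleanly the two dimensions separate.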

Our brains possess a remarkable mental facility for categorizing pitches, allowing us to perceive melodies as coherent even if played on slightly mistuned instruments. We learn to assign pitches to mental “boxes” representing intervals like “major second” or “major third,” classifying any pitch close enough to that ideal interval size.

Melodic Contour:

The contour of a melody, or how it rises and falls in pitch, is a vital cue for memory and recognition. Even infants as young as five months respond to changes in melodic contour. Interestingly, both children and untrained adults often think melodies with the same contour but slightly altered intervals are identical, highlighting the primacy of contour in initial recognition. Familiar tunes remain recognizable even when the melodic contour is “compressed”. Composers can create repeating contour patterns to help bind a melody together, even if they are not exact repeats, adapting the contour to fit the specific pitch staircase of a scale. Diana Deutsch refers to these building blocks as “pitch alphabets,” which can be compiled from scales and arpeggios.
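
Contour can be captured by keeping only the direction of each pitch step. A small sketch (the two melodies are hypothetical examples, not drawn from the source):

```python
def contour(pitches):
    """Reduce a melody (given as MIDI-style note numbers) to its
    contour: +1 for each upward step, -1 downward, 0 for a repeat."""
    return [(b > a) - (b < a) for a, b in zip(pitches, pitches[1:])]

# Two hypothetical melodies with different interval sizes but the same
# rise-and-fall shape share a contour, which is why untrained listeners
# often judge them to be the same tune:
m1 = [60, 62, 64, 62, 60]
m2 = [60, 63, 65, 64, 59]
print(contour(m1))                 # [1, 1, -1, -1]
print(contour(m1) == contour(m2))  # True
```

This reduction also models the "compressed contour" finding: shrinking every interval changes the pitches but leaves the contour list untouched.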

Tonal Hierarchy and Expectation:

Our perception of melody is deeply influenced by the tonal hierarchy, which is our subjective evaluation of how well different notes “fit” within a musical context or key. Even listeners without extensive musical training have a mental image of this hierarchy and constantly refer to it to form anticipations and judgments about a tune. This is supported by experiments where listeners consistently rated the “rightness” of notes within a set tonal context. The tonal hierarchy helps us organize and understand music, making it sound like music rather than a random sequence of notes. Music that ignores these hierarchies can be harder to process and may sound bewildering.

Gestalt Principles and Binding:

Underlying melody perception is the brain’s constant search for coherence in the auditory stimuli it receives. We mentally and unconsciously “bind” a string of notes into a unified acoustic entity, a tune. This process aligns with principles of gestalt psychology, where the brain seeks to perceive patterns. For example, large intervals can create a discontinuity, challenging the brain’s ability to perceive the melody as a single “gestalt”. Conversely, repetition of notes or contours can strengthen the perception of a unified melody. The auditory picket-fence effect demonstrates our ability to perceive a continuous tone even when interrupted by noise, highlighting the brain’s tendency to “fill in” gaps to maintain a coherent auditory stream. In sequences with large pitch jumps, listeners may even separate the notes into two distinct melodic streams.

Phrasing and Rhythm:

Phrasing, the way a melody is divided into meaningful segments, is crucial for perception. Click migration experiments show that listeners tend to perceive breaks between notes that delineate musical phrases. Phrasing is closely linked to rhythmic patterns, which provide a natural breathing rhythm to music and help us segment it into manageable chunks. The duration and accentuation of notes contribute to our perception of rhythmic groupings.

Memory and Context:

When we listen to a melody, we hear each note in the context of what we have already heard, including previous notes, the melodic contour, repeated phrases, the established key, and even our memories of other music. This constant referencing and updating of information shapes our perception of the unfolding melody.

Brain Processing:

The brain processes melody through various regions, including the lateral part of Heschl’s gyrus and the planum temporale in the temporal lobe, which are involved in pitch perception and sophisticated auditory attributes. The anterior superior temporal gyrus also handles streams of sound like melodies. Research suggests that the right hemisphere discerns the global pattern of pitch contour, while the left hemisphere processes the detailed aspects of pitch steps.

Atonal Music:

Music that rejects tonal hierarchies can be harder to process because it goes against our learned expectations about note probabilities. While some theories attempt to analyze atonal music through concepts like pitch-class sets, these approaches often don’t explain how such music is actually perceived.

In summary, melody perception is a dynamic process involving the processing of pitch and its relationships, the recognition of melodic contour, the influence of tonal hierarchies and learned expectations, the brain’s ability to bind sequences of notes into coherent units, the segmentation of melodies into phrases guided by rhythmic patterns, and the crucial role of memory and context. These elements work together to allow us to experience a series of discrete musical notes as a meaningful and unified melodic line.

Understanding Harmony and Dissonance in Music

Harmony is about fitting notes together. Conventionally, combinations that fit well are called consonant, and those that fit less well are dissonant. In a reductive formulation, consonance is considered good and pleasing, while dissonance is bad and unsettling. However, these concepts are often misunderstood and misrepresented.

Historical Perspectives on Consonance and Dissonance:

  • In tenth-century Europe, a perfect fifth was generally not deemed consonant; only the octave was.
  • When harmonizing in fifths became common, fourths were considered equally consonant, which is different from how they are perceived today.
  • The major third (C-E), part of the “harmonious” major triad, was rarely used even by the early fourteenth century and was not fully accepted as consonant until the High Renaissance.
  • The tritone interval, supposedly dissonant, becomes pleasing and harmonious when part of a dominant seventh chord (e.g., adding a D bass beneath the tritone C-F# makes it part of a D dominant seventh).
  • The whole polarizing terminology of consonance and dissonance is a rather unfortunate legacy of music theory.

Sensory (or Tonal) Dissonance:

  • There is a genuinely physiological aspect of dissonance, distinguished from musical convention, called sensory or tonal dissonance.
  • This refers to the rough, rattle-like auditory sensation produced by two tones closely spaced in pitch.
  • It is caused by the beating of acoustic waves when two pure tones with slightly different frequencies are played simultaneously. If the beat rate exceeds about 20 Hz, it is heard as roughness.
  • The width of the dissonant region depends on the absolute frequencies of the two notes. An interval consonant in a high register may be dissonant in a lower register. Therefore, there is no such thing as a tonally dissonant interval independent of register.
  • In the mid-range of the piano, minor thirds generally lie beyond the band of roughness, while even a semitone does not create roughness for high notes. However, in the bass, even a perfect fifth can become dissonant in sensory terms, explaining the “gruffness” of low chords.
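
The beating mechanism is ordinary trigonometry, and the register-dependence of the frequency gap follows from simple arithmetic. A short Python sketch (the frequencies are chosen for illustration):

```python
import math

f1, f2 = 440.0, 444.0   # two pure tones 4 Hz apart

# sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2): the sum of two close
# tones is one tone at the mean frequency whose loudness swells and
# fades ("beats") at f2 - f1 Hz.
def two_tone(t):
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

def beat_form(t):
    return 2 * math.sin(math.pi * (f1 + f2) * t) * math.cos(math.pi * (f2 - f1) * t)

# Verify the identity numerically over one second:
max_err = max(abs(two_tone(i / 1000) - beat_form(i / 1000)) for i in range(1000))
print(max_err < 1e-9, "beat rate:", f2 - f1, "Hz")

# The same interval spans very different frequency gaps in different
# registers: the fundamentals of a perfect fifth above low A (55 Hz)
# are only 27.5 Hz apart, while above A5 (880 Hz) the gap is 440 Hz,
# one reason the roughness band depends on register.
for base in (55.0, 880.0):
    print(base, "Hz + fifth: gap of", base * 3 / 2 - base, "Hz")
```

The second loop makes the register point quantitative: an interval is a frequency *ratio*, but roughness depends on absolute frequency *differences*, which grow with register.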

Consonance, Dissonance, and Overtones:

  • Tones played by musical instruments are complex, containing several harmonics.
  • Two simultaneously sounded notes offer many possibilities for overtones to clash and produce sensory dissonance if close enough in frequency.
  • Hermann von Helmholtz calculated the total roughness for all overtone combinations, generating a curve of sensory dissonance with dips at various intervals of the chromatic scale. The octave and fifth have particularly deep “consonant” valleys.
  • However, the depths of several “consonant” valleys don’t differ much. The modern dissonance curve shows that most intervals between the major second and major seventh lie within a narrow band of dissonance levels, except for the perfect fifth. Even the tritone appears less dissonant than major or minor thirds according to some measurements.
  • The greatest sensory dissonance is found close to the unison, particularly the minor second, predicted to sound fairly nasty. However, such intervals can be used for interesting timbral effects.
  • The brain is insistent on “binding” overtones into a single perceived pitch. If a harmonic is detuned, the brain tries to find a new fundamental frequency that fits, and only when the detuning is too large does it register the “bad” harmonic as a distinct tone. Percussive instruments often produce inharmonic overtones, resulting in an ambiguous pitch.

Cultural Influences and Learning:

  • Whether we experience note combinations as smooth or grating is not solely a matter of convention; there is a genuine physiological component. Even so, our likes and dislikes for particular combinations probably involve very little that is innate and are mostly products of learning.
  • What is disliked is probably not the dissonances themselves but how they are combined into music.
  • Acculturation can overcome sensory dissonance, as seen in the ganga songs of Bosnia and Herzegovina, where chords of major and minor seconds are considered harmonious.
  • People tend to like best what is most familiar. Western listeners, being accustomed to tonal music, will be acclimatized to octaves, fifths, thirds, etc., and hear less common intervals as more odd.
  • Studies suggest that cultural tradition exerts a stronger influence than inherent qualities in determining the emotional connotations of music, implying that perceptions of consonance and dissonance can also be culturally influenced.

Harmony in Musical Composition:

  • In polyphonic music, harmony fills out the musical landscape. If melody is the path, harmony is the terrain.
  • Harmonization is the process of fitting melodic lines to chords. This is often where music comes alive.
  • Harmonization is generally more sophisticated in classical music, tending to use voice-leading, where accompanying voices have their own impetus and logic, rather than being monolithic chords.
  • Harmonic progressions are sequences of chords. In Western classical music until the mid-nineteenth century, these tended to be formulaic and conservative, involving transitions to closely related chords. Pop and rock music have inherited much of this tradition.
  • Modulation is the alteration of the key itself within a harmonic progression.
  • Music theorists and psychologists have attempted to create a cartography of chords and keys, trying to map out relationships in harmonic space. Carol Krumhansl’s research suggests that the perceived relatedness of keys aligns with the cycle of fifths.
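
As a rough illustration of Krumhansl's finding, key relatedness can be proxied by distance around the cycle of fifths. The metric below is a simplification for illustration, not Krumhansl's actual measure:

```python
def fifths_distance(key_a, key_b):
    """Steps between two major keys around the cycle of fifths, a crude
    proxy for perceived key relatedness."""
    circle = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]
    i, j = circle.index(key_a), circle.index(key_b)
    d = abs(i - j)
    return min(d, 12 - d)   # the cycle wraps around, so take the shorter way

print(fifths_distance("C", "G"))   # 1: closely related, a common modulation
print(fifths_distance("C", "F"))   # 1: the subdominant is equally close
print(fifths_distance("C", "F#"))  # 6: maximally distant keys
```

This matches the "formulaic and conservative" progressions noted above: transitions to closely related chords are small steps around this circle, while a jump to a distance-6 key is the most jarring modulation available.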

Harmony, Dissonance, and Musical Style/Emotion:

  • Many classical-music traditionalists deny enjoying dissonance, associating it with jarring modern music. However, even composers like Chopin use dissonance extensively.
  • The use of dissonance by modernist composers was seen by some as an affront to music itself. However, champions of atonalism argued that aversion to dissonance is culturally learned.
  • “Dissonant” intervals like major sixths, sevenths, and ninths can create luxuriant sounds in the hands of composers like Debussy and Ravel.
  • Composers may confuse our expectations regarding harmony to introduce tension and emotion.
  • Expectations about harmony are crucial for our emotional response to music. Composers manipulate these expectations through devices like cadences, anticipation notes, and suspensions.
  • Ambiguity in harmony and tonality can also create a powerful effect, with pleasure arising from the resolution of confusion.
  • Different musical genres establish their own harmonic schemas, which they can then use to manipulate tension.

Dissonance in Polyphony:

  • In early medieval polyphony, it was considered better to compromise the melody than to incur dissonance. However, composers increasingly prioritized maintaining good melodies in each voice, even if it led to occasional dissonances.
  • This led to rules governing permissible dissonances in counterpoint. In Palestrina’s counterpoint, dissonances often occur on “passing tones” leading towards a consonance, and strong consonances are achieved at the beginnings and ends of phrases. The main objective is to maintain horizontal coherence of each voice while enforcing vertical integration through judicious use of consonance and controlled dissonance.
  • Streaming of sound can offer a barrier to the perception of dissonance in polyphony. If voices are sufficiently distinct, potentially dissonant intervals may not be registered as jarring. Bach’s fugues, for example, contain striking dissonances that can go unnoticed due to the independence of the voices.
  • Harmony can support the mental juggling act of listening to multiple melodies simultaneously, especially when the melodies are in the same key. Harmonic concordance seems to assist cognition.
  • The composer doesn’t always want polyphonic voices to be clearly defined. In hymn singing, the focus is on creating a sense of unity through harmonies, resulting in a more homophonic texture where voices combine to carry a single melody, as opposed to the elaborate interweaving of voices in Bach’s contrapuntal music.

In conclusion, harmony and dissonance are fundamental aspects of music that involve both acoustic/physiological phenomena and cultural learning and conventions. Their perception and use have evolved throughout music history and continue to be manipulated by composers to create diverse musical experiences and emotional effects.

Understanding Musical Rhythm and Meter

Rhythm and meter are fundamental aspects of music. Rhythm is the actual pattern of note events and their durations, and it tends to be much less regular than the meter or tactus: it is the “felt” quality laid over the regular subdivision of time that appears on paper. Rhythm can be catchy and move us physically.

Meter, on the other hand, is the regular division of time into instants separated by equal intervals, providing what is colloquially called the ‘beat’. It’s the underlying pulse. The numbers at the start of a stave, the time signature, indicate how many notes of a particular duration should appear in each bar, essentially telling us whether to count the rhythm in groups of two, three, four, or more beats. To create a beat from a regular pulse, some pulses need to be emphasized over others, often by making them louder. Our minds tend to impose such groupings even on identical pulses. The grouping of pulses defines the music’s meter. Western music mostly uses simple meters with recurring groups of two, three, or four pulses, or sometimes six.
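
The grouping of pulses into a meter can be sketched in a few lines: accenting every second, third, or fourth pulse of an otherwise undifferentiated stream is enough to imply duple, triple, or quadruple time. A minimal illustration:

```python
def metricize(n_pulses, group):
    """Impose a meter on an undifferentiated pulse stream by accenting
    every group-th pulse ('>' = accented, '.' = unaccented)."""
    return [">" if i % group == 0 else "." for i in range(n_pulses)]

print("".join(metricize(12, 3)))  # >..>..>..>..  (triple meter, waltz-like)
print("".join(metricize(12, 4)))  # >...>...>...  (quadruple meter)
```

The same twelve pulses yield entirely different meters depending only on which ones are emphasized, mirroring the point that our minds impose such groupings even on identical pulses.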

The tactus is related to but different from meter; it’s the beat we would clap out while listening to music and may be culture-specific. We tend to tap out a slower pulse to familiar music.

The source emphasizes that not all music possesses rhythm in a discernible way, citing compositions by Ligeti and Xenakis as continuous skeins of sound without a clear pulse, and Stockhausen’s Kontakte as being made of disconnected aural events. Gregorian chant is an example of music that can have regularly spaced notes but lack a true meter. Music for the Chinese fretless zither (qin) has rhythm in terms of note lengths, but these are not arranged against a steady underlying pulse.

However, a quasi-regular pulse pervades most of the world’s music. A rhythm is typically created by elaborating the periodic beats. Subdivisions and stresses superimposed on a steady pulse give us a sense of true rhythm, helping us locate ourselves in time much like the tonal hierarchy helps us in pitch space. This orderly and hierarchical structuring of time is found in the rhythmic systems of many musical traditions.

The source notes that the meter is often portrayed as a regular temporal grid on which the rhythm is arrayed, but the real relationship is more complex. Musicians tend subconsciously to distort the metrical grid to bring out accents and groupings implied by the rhythm. This stretching and shrinking of metrical time helps us perceive both meter and rhythm.

Western European music has traditionally chopped up time by binary branching, with melodies broken into phrases grouped in twos or fours, divided into bars, and beats subdivided into halves and quarters. This binary division is reflected in note durations like the semibreve, minim, and crotchet. However, some Balkan music uses prime numbers of beats in a bar, suggesting that binary division is not universal. Eastern European song may have constantly changing meter due to the rhythmic structure of its poetry.

Creating a true sense of rhythm and avoiding monotony involves not just stressing some beats but an asymmetry of events, akin to the skipping rather than plodding character of spoken language. The source discusses rhythmic figures like the iamb, trochee, dactyl, and anapest, which are “atoms” from which we build a sense of rhythm and interpret musical events. Repetition of these units is crucial for rhythmic coherence to be felt. Our assignment of rhythmic patterns draws on information beyond note duration, including melody, phrasing, dynamics, harmony, and timbre.

Composers generally want us to perceive the intended rhythm and use various factors to reinforce it. However, they may also seek to confuse our expectations regarding rhythm to introduce tension and emotion, as it is easy to hear when a beat is disrupted. Examples of this include:

  • Syncopation, which involves shifting emphasis off the beat.
  • Beethoven’s Fifth Symphony starting with a rest on the downbeat.
  • Rhythmic ambiguity created by conflicting rhythmic groupings and meter, as in Beethoven’s Piano Sonata No. 13 and Bernstein’s “America”.
  • Rhythmic elisions and deceptive rhythmic figures in popular music.
  • Unambiguous disruption of meter, creating a jolt, as in Stravinsky’s The Rite of Spring.
  • The use of anticipation tones in classical cadences to modulate the expectation of the impending cadence.

The source also points out that our sense of metrical regularity isn’t always strong, especially without musical training, and folk music traditions can exhibit irregular meters. In early polyphonic music, complex crossed rhythms were common, even without explicit metrical notation. Some musical traditions, like African, Indian, and Indonesian music, use cross-rhythms and polyrhythms. The minimalist compositions of Steve Reich utilize phasing, where repetitive riffs played at slightly different tempos create shifting rhythmic patterns.
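
Reich's phasing has simple arithmetic underneath: the drift per repetition, and hence the time to come back into alignment, follow directly from the two loop durations. A sketch with illustrative durations (not Reich's actual tempos):

```python
# Two identical riffs, one played slightly faster. Durations are in
# milliseconds and are assumed for illustration. After each pass the
# faster loop gains a little, cycling through every possible rhythmic
# offset before the two realign.
slow_ms, fast_ms = 2000, 1980
gain_per_pass = slow_ms - fast_ms        # 20 ms gained per repetition
passes_to_realign = slow_ms // gain_per_pass
print(passes_to_realign)                 # 100 passes to come back into phase
```

The smaller the tempo difference, the longer the cycle of shifting patterns, which is why Reich's pieces can sustain continuously evolving rhythms from a single short riff.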

Ultimately, rhythm provides a way to interpret and make sense of the stream of musical events by apportioning them into coherent temporal units. Composers manipulate rhythm and meter in various ways to create structure, expectation, and emotional impact in their music.

By Amjad Izhar
Contact: amjad.izhar@gmail.com
https://amjadizhar.blog

