One thing I learned after buying some gear to record electric guitar at low volume at home is how important the physics of the speakers are. Plug a tube amp into a cabinet simulator and you'll lose a lot more than you would using solid-state electronics on a good-but-not-great Fender amp, especially if you use fuzz/distortion pedals.
I'm not sure Hendrix was a systems engineer, but he was a transcendent blues artist, that's for sure.
One could make a bunch of random noises with a guitar that are hard to reproduce but the music could be shit.
Since the 1980s, we have had the "Sustainiac": an active circuit installed in the electric guitar along with a "reverse pickup" which is energized in order to excite vibration in the strings.
With this device, at the flip of a switch, you get indefinite sustain on any note on the neck, at any volume, distortion or not --- even if the electric guitar is not plugged into an amplifier at all, and just heard acoustically.
The best implementations of this have a three-way harmonic switch: you can choose between exciting the fretted (or open) note itself (the fundamental, a.k.a. first harmonic), an octave above it (second harmonic), or a higher harmonic still.
You can be sustaining a given note, and then, at the flip of a switch, it will fade over to the higher harmonic.
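The harmonic switch is essentially a driven-resonance effect, and a rough intuition for it fits in a few lines. Here is a toy steady-state model of a damped string mode under a sinusoidal driver; all values are illustrative, not taken from any real Sustainiac circuit:

```python
import math

def steady_amplitude(f_drive, f_res, zeta=0.02):
    """Steady-state response of a damped string mode (resonant at f_res)
    driven at f_drive. A mode only absorbs energy efficiently when the
    driver sits on its resonance, which is how a harmonic switch can pick
    which overtone of the string gets sustained."""
    w, w0 = 2 * math.pi * f_drive, 2 * math.pi * f_res
    return 1.0 / math.sqrt((w0**2 - w**2) ** 2 + (2 * zeta * w0 * w) ** 2)

# Open A string: fundamental mode at 110 Hz, second-harmonic mode at 220 Hz.
fund_ringing = steady_amplitude(110.0, 110.0)    # driver locked to the fundamental
octave_idle = steady_amplitude(110.0, 220.0)     # the 220 Hz mode barely moves
octave_ringing = steady_amplitude(220.0, 220.0)  # flip the switch: now 220 Hz rings
```

Drive at 110 Hz and only the fundamental rings; retune the driver to 220 Hz and the energy flows into the octave instead, which is the fade-over you hear at the flip of the switch.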
YouTube videos of this in action are worth checking out.
Here is one:
Nigel Tufnel: The sustain, listen to it.
Marty DiBergi: I don't hear anything.
Nigel Tufnel: Well you would though, if it were playing.
I've also managed to make an E-Bow work with a steel-string acoustic guitar (but only on one string IIRC).
https://au.fender.com/products/fender-eob-sustainer-stratoca...
You might enjoy this video. He really goes deep into using the guitar to create textures and emotions. He talks about The Edge (U2) and his Infinite Guitar, and says he actually called Michael Brook to see if he could get one. Eventually Fender did a custom build on his Clapton Strat, which became the Fender EOB Sustainer.
Hendrix was a working musician who paid his dues on the chitlin' circuit with artists like The Isley Brothers, Little Richard, Ike & Tina Turner, and Sam Cooke before making it on his own. AFAIK those are pretty high-pressure assignments, and count as real work...
One could argue this lack of devotion predates even the smartphone. Heck, I remember getting a Nintendo Entertainment System in the late '80s and then not going out biking or playing basketball as a result.
All other electronic instruments, with the one exception of the Theremin, have a fundamental problem with human expression: there is an unsolvable disconnect between the performer's actions and what their audience perceives.
See: https://www.scribd.com/document/55134776/48787070-Bob-Ostert...
With an electric guitar you get the physicality and dynamism of an acoustic instrument with the complex timbres and extended technique possibilities of an electric/electronic instrument.
There are complex and musically significant feedback loops occurring across many dimensions that lead to extremely complex transformations of timbre via both traditional music theoretical techniques and the physics of a tube amplifier combined with an inductive load (the guitar pickup).
It's really crazy how much more dynamic and complex this can be than even a highly sophisticated modular synthesizer or whatever. Even the way you overload the power supply in a tube amplifier can be manipulated on the fly to enhance and transform timbre.
Then on top of all that it is so incredibly physical that a performer like Jimi Hendrix can manipulate these systems and have the audience intuitively understand what he is doing. Never in a million years would THAT be possible with any other electronic instrument.
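That power-supply interaction ("sag") has a simple intuition: the rail voltage droops under sustained current demand and recovers slowly, so hard playing compresses itself. A toy sketch with made-up constants, not a model of any real amplifier:

```python
def sag_supply(samples, rail=1.0, sag_depth=0.4, recovery=0.995):
    """Toy model of tube-rectifier 'sag': the supply rail droops in
    proportion to recent output demand and recovers slowly, so loud
    passages squash themselves. All constants are illustrative."""
    droop, out = 0.0, []
    for x in samples:
        rail_now = rail - sag_depth * droop          # the sagging rail
        y = max(-rail_now, min(rail_now, x))         # clip at the current rail
        droop = recovery * droop + (1 - recovery) * abs(y)
        out.append(y)
    return out

# A held loud note: the first sample comes out at full level, but the
# rail sags under it and later samples are noticeably compressed.
held_note = sag_supply([1.0] * 2000)
```

The player can work this dynamically: dig in, let the rail sag, then back off and feel the amp "bloom" back, which is part of what makes the timbre feel alive.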
There are always some people who get extremely defensive whenever I say that techno didn't click for me until I heard this kind of "techlow" music. Specifically about the part where I think that the reason is also a human expression problem, because of limitations imposed by the electronic media used.
EDIT: having said that, I don't think I would agree with your premise, because it is colored by a subtle form of survivorship bias. None of us remembers what it's like not to know electric guitars or what they sound like, so claiming "the audience intuitively understands what Jimi Hendrix is doing" is like saying everyone "intuitively understands" their native language. On top of that, there's nothing about the workings of an electric guitar that wouldn't in principle work for something like an electric violin or whatever.
[0] https://www.youtube.com/watch?v=-0gED3rn2Tc
They do a great job with changing their timbre and tones but often ignore a bunch of other factors that make music interesting: the rarity of time signatures other than 4/4, the way certain rhythms are locked into certain genres, the choices of keys used, the limited or missing chords, etc. At some point you start hearing two electronic songs that sound totally different at a superficial level and realize they're incredibly derivative of each other.
I guess the part people don't like hearing is the implication techno is somehow not expressive. I'm not sure that it lacks expressiveness, but it is certainly more "controlled" than traditional music. When I first heard techno as a teenager in the 90s, my mind was blown. I remember exactly where I was the first time I heard Underworld [1], Photek [2], and Autechre [3]. I think I was attracted to these sounds _because_ they were so different. I think it's hard for electronic music fans like myself to accept the idea that it isn't expressive _because_ it is so different. Isn't it just a different kind of expression?
Still, people like what they like. I'm glad you found a version of dance music that works for you. I've long since moved on being judgmental about people's musical tastes. I think it's just wonderful that music exists at all!
[1] https://www.youtube.com/watch?v=Q5GjVvlmg3o [2] https://www.youtube.com/watch?v=-Xl1xzSRaV0 [3] https://www.youtube.com/watch?v=g6zT3kVtpHc
I think of it more like a painter's palette: every instrument and tool involved in creating music has a different set of colors to choose from, and can also filter some "colors" out if we think of things like audio processing filters.
The tools and techniques typically used to produce techno filter out "colors" that feel essential to me to connect with a song, and yeah, that "controlled" aspect of it is probably a large part of that. That doesn't mean it's not expressive, it's just expressive in a way that I struggle to connect with.
EDIT: funny enough I actually have protanomaly, so my choice of analogy is slightly ironic there. Some visual art and design out there objectively looks terrible from my subjective experience, since the colors look completely off. But that doesn't mean I'm saying the art is objectively bad.
Also, I haven't checked what Juno Reactor are doing these days, but their old work is fantastic. My favorite show of theirs is Juno Reactor – Shango Tour 2001 Tokyo [2].
For electric violin, I love Ed Alleyne-Johnson [3]. I've never seen him live (I'm not from the UK) but I own a couple of his earlier works. They remind me of the final years of my dad's life, and of when he finally passed away. Makes me cry every time.
[1] https://www.youtube.com/watch?v=VMIL1YbUQrI
[2] https://www.discogs.com/master/782091-Juno-Reactor-Shango-To...
Just to be clear, Moog synthesizers (and a number of other brands) are electronic, yes, but they are analog electronics.
And just to clarify: I don't dislike electronic instruments. I just think that on some subconscious level the human brain can detect other humans playing a live instrument. Like there's something "embodied" in the sound that is likely missing from a pure electronic instrument. And I needed that element to "unlock" access to techno.
Tons of work has been done on various modes of humanization by trying to parameterize and modulate these aspects over time. Timing accuracy, velocity variance, chance, etc.
A well-played instrument certainly feels like someone speaking and expressing themselves to you. There are attempts to capture this with MPE instruments such as the Osmose, or Imogen Heap's MiMU gloves.
How up to date is this opinion of yours? Expression on guitar is pretty intuitive, but modern electronic instrument manufacturers have been working on this problem and created modes of expression that definitely solve this problem.
For example, EWIs allow you to use breath control for expression with many of the same techniques available on actual wind instruments. Also many synths now have features like polyphonic aftertouch, pitch/mod wheels, which allow you to add expression to a note while it is playing. Apps and hardware exist which allow you to use novel methods of capturing motion or other forms of expression. And most modern synths/midi controllers allow you to decide what parameters are affected.
> Then on top of all that it is so incredibly physical
That's an affectation. I can stand on my tiptoes and close my eyes when bending up a note on the synth the same as I can on the guitar. Neither affects the sound, and both are a conscious decision to project an appearance of "I'm really shredding"
> With an electric guitar you get the physicality and dynamism of an acoustic instrument with the complex timbres and extended technique possibilities of an electric/electronic instrument.
That can apply to any instrument once you "electrify" it. What makes a guitar more expressive than a cello or trumpet with a pickup/mic running through effect processing? I play guitar, keys and trumpet, and while I agree that a casio keyboard has limited expression options, your opinion doesn't sound researched.
The difference lies in the pickup! On those other instruments you will be using a contact mic (piezo transducer), whereas the solid-body guitar is using an inductive coil.
The contact mic picks up only physical resonance, whereas the coil is measuring an electromagnetic field: plucking the steel string induces a change in voltage in the coil. This means the coil can pick up all sorts of interesting electromagnetic interference from the tube amplifier, all of it frequency dependent, and involve that in whatever feedback loops are occurring.
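To make the "frequency dependent" point concrete, here is a crude single-coil model: the coil as a series R-L, with the cable/coil stray capacitance in parallel. The component values are ballpark guesses, not measurements of any real pickup:

```python
import math

def pickup_impedance(freq_hz, R=6000.0, L=2.5, C=100e-12):
    """|Z| of a crude single-coil model: 6 kOhm DC resistance in series
    with 2.5 H of inductance, shunted by ~100 pF of coil+cable capacitance.
    Values are illustrative only."""
    w = 2 * math.pi * freq_hz
    z_coil = complex(R, w * L)        # resistance in series with the inductance
    z_cap = 1 / complex(0, w * C)     # stray capacitance across the whole coil
    return abs(z_coil * z_cap / (z_coil + z_cap))

# Unlike a piezo feeding a buffer, this source impedance is wildly
# frequency dependent: modest at bass, huge near the LC resonance.
z_bass = pickup_impedance(100.0)      # roughly 6 kOhm
z_mid = pickup_impedance(1000.0)      # roughly 17 kOhm
z_peak = pickup_impedance(10000.0)    # megohms, near resonance
```

That resonant peak is also why cable capacitance audibly changes a passive guitar's tone: the cable's C shifts the resonance down.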
I certainly don’t agree with this as a musician who has tried most of these attempts by electronic music manufacturers.
A whammy bar?
For one, you can’t easily play two melodies simultaneously across several octaves, using both of your hands, with an electric guitar.
Stringed electronic instruments do have their advantages, but so do the others. Each music making thing has its place in the spectrum.
Two books that have helped me greatly in my musical life, in case people haven’t heard of them, are The Listening Book, and Bridge of Waves, by W.A. Mathieu.
If you're limiting yourself to a 6-string guitar, the distance between the two melodies is limited compared to a piano, but guitars don't have to be limited to 6 strings.
Classical guitar is full of this kind of thing.
Having taken piano lessons, but being more into guitar, I think the thing is that almost all people who play piano are introduced to this, and it is a core concept in far more piano music than guitar music. But it is not impossible on guitar, and many works for piano that get adapted to guitar require the player to do it.
E.g., there are plenty of players who have studied and played the Well-Tempered Clavier on guitar.
Keyboards can approach that with polyphonic touch keys like the Hydrasynth (lean into keys, pressing them harder, for bending the tone in a configured patch), sustain pedals, and pitch bend/modulation controls, but not the nuanced touch of skin on a vibrating string.
I think synth guitars exist, too, but don’t know anything about them. The pedalboards are enough, maybe :)
Of course they exist; just listen to Pat Metheny. There are MIDI hex pickups that can play any synth over MIDI with full expression.
Is that really true though? If I watch a cellist play I can pretty clearly see all the things they are doing and it will correlate neatly to the timbre of the sound.
Secondly, I think it's important to note that the tube amp and the guitar are separable, and I don't think their connection is particularly magical. I can reamp a sound from my synthesizer (or maybe a keytar?) into a guitar chain, and if I manipulate the mic and other controls the way I might manipulate the pickup, I can also get all manner of interesting feedback effects. My inputs will have different harmonic characteristics of course, and the tube amp's effects are mostly transformations of harmonics; you'll still get some cool tones and they will be subject to a lot of the same rules as if a guitar were being played.
> Secondly, I think it's important to note that the tube amp and the guitar are separable, and I don't think their connection is particularly magical. I can reamp a sound from my synthesizer (or maybe a keytar?) into a guitar chain, and if I manipulate the mic and other controls the way I might manipulate the pickup, I can also get all manner of interesting feedback effects.
The story is not quite so simple. Your synthesizer is going to have a buffered output, so it won't have the complex impedance-loading interactions with the amplifier that the guitar pickup has.
This is actually critical to how early distortion effects such as the classic Fuzzface work and imo is essential for the kind of complex timbres you can produce with a guitar + tube amp.
In fact you can take an electric guitar, put a buffer pedal in the chain between your fuzz pedal and amp and completely destroy the ability to produce wild feedback and distortion.
I'm a guitarist, but there's nothing particularly magical about a high-impedance signal, other than that it tends to lead to noise and makes really obnoxious things matter, like how low-capacitance your cable is. Also, a TON of modern guitars have low(ish)-impedance outputs because they use active pickups.
The pedals and system being dependent on the high impedance was always a bug, not a feature, and it makes the setup incredibly dependent on variables that really wouldn't be that hard to just buffer and then recreate deterministically. Like, if your pedal should react to that impedance, buffer the front and put a big inductor in the pedal (or a transformer using only half, or, and I've actually seen this, just a whole guitar pickup). Then you're not dependent on the pickups of the guitar or the capacitance of the cable or anything else, and you can make sure the effect sounds good regardless of pickup type.
A Fuzz Face works the way it does because it actually gets affected by the guitar's impedance changing as you work the knobs on the guitar and pick differently. The Fuzz Face has minimal input filtering, the guitar's knobs actually change the bias of the first transistor IIRC and cause massive changes in sound.
If you stick a buffer in front of it that interaction is gone and there is nothing you can stick after the buffer to bring it back. You pretty much have to plug the guitar directly into a Fuzz Face for it to work as intended. There are even constant arguments about putting the Wah in front of the FF or after it. I'm not sure if the article even has it right or whether Hendrix did it differently at different times. Other articles show a different order of the effects.
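The volume-knob "clean-up" can be approximated, setting aside the bias-shift part (which is the other half of the story), as a voltage divider: the guitar's pot working against the Fuzz Face's very low input impedance versus a buffer's high one. All component values below are illustrative:

```python
def signal_into_load(pot_position, z_pickup=8000.0, pot=500e3, z_load=8000.0):
    """Fraction of the pickup voltage reaching the next stage's input.
    The volume pot splits into an upper arm (pickup side) and a lower arm
    (ground side); the load hangs off the wiper. Values are illustrative:
    ~8 kOhm for a Fuzz-Face-style input, ~1 MOhm for a buffer."""
    r_top = pot * (1.0 - pot_position)   # wiper to hot side
    r_bot = pot * pot_position           # wiper to ground
    if r_bot == 0.0:
        return 0.0                       # volume off: nothing gets through
    r_par = (r_bot * z_load) / (r_bot + z_load)  # load parallels the lower arm
    return r_par / (z_pickup + r_top + r_par)

# Rolling the guitar volume to half drops the level far more into a
# low-impedance fuzz than into a 1 MOhm buffer -- the "clean-up" effect:
drop_into_fuzz = signal_into_load(1.0) / signal_into_load(0.5)
drop_into_buffer = signal_into_load(1.0, z_load=1e6) / signal_into_load(0.5, z_load=1e6)
```

With these numbers the same half-turn of the knob costs several times more level into the fuzz than into the buffer, which is why the knob sweeps you from crushing fuzz to near-clean, and why a buffer in front flattens that range out.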
There are other fuzz circuits that behave differently and work better with buffers and would be more uniform when used with other types of instruments or with electric guitars with active pickups (which are buffered).
E.g., I have a Tone Bender and have had several fuzzes in the "Big Muff" category, along with one based on the Fox Tone Machine. The Tone Bender and Big Muff can't clean up via the guitar controls at all like the Fuzz Face can, and IIRC the Fox Tone Machine is somewhere in the middle. The Fuzz Face, when set up correctly, is really quite amazing, as you can go from crystal clear to crushing fuzz with the volume knob on the guitar. Once you've tried it, you realize Jimi Hendrix was doing it constantly, in an amazing way.
That is going to be something like a transformer to step down your line-level signal, plus some series resistance to match the load and help drive the amp.
An actual coil pickup has reactive impedance that is frequency dependent and will result in a more complex interaction between the devices.
> The pedals and system being dependent on the high impedance was always a bug, not a feature
Sure if you think like an engineer, but everything you are complaining about is what allows someone like Jimi Hendrix to do what he did with a guitar.
You can hear it particularly on "Where I End and You Begin" from Hail to the Thief. Ed O'Brien complements its sound using an EBow (back before he had the sustainer) in that song.
Electric bass? Heck, even in synthesizers, you have the EWI or the Haken Continuum.
Guitar (and bass) are obviously far and away the most successful, but it does a disservice to a number of wonderful inventions to say they're the only ones. Just look at what the Japanese band T-SQUARE does with the EWI to see people innovating at the edges.
But is it one of the most versatile instruments? You can do signal transforms with any kind of audio input, although it's done more with the electric guitar than any other instruments.
I would say that, in practice, it has the most versatile sonic profile.
With the right interface, I think the synth can be more expressive. Look at the Haken Continuum or ExpressiveE Osmose - both can be used with something like the Expert Sleepers FH-2 to get MPE data to the modular.
I do see your point, and agree the amount of articulation you can do with guitar is hard to beat, but I do think a synth can win, if the setup is built for it.
I remembered learning about similar MIDI controllers when I was in school.
I often lament the lack of other electric instruments.
Look at the Roli Seaboard; it has an insane number of degrees of freedom for expression.
https://youtu.be/2fQbtp2BgY4?si=S52A-22A3GlXPajU
The solo starts past the middle.
Synth music elevated electronically generated tones beyond anything ever heard.
I remind you that most rock and roll and rock music was about speed and mimicking the sound of a rumbling car engine, since the car was a symbol of freedom in America: being able to run away from your toxic community to find yourself somewhere better.
That was the message for the young with rock and roll: a speedy engine for your ears.
Electronic music was like replacing the car with a UFO, evoking space travel.
With the progressive subgenre of techno you got the same feeling, but with no subtle hints. Heck, one of the best-known songs in Spain ever, "Flying Free", literally mixes the sounds of drifting cars in between the melodies, making the listener really happy in a very direct way, since tons of young people in the '90s got to the outskirt night clubs... by car. So they felt as if they were driving an infinite highway rave that went on for days.
Few program synthesizers. Most just use presets. Infinite freedom is paralyzing. Building blocks are comfortable.
>Electromagnetic pickups—(...)—fixed the loudness problem. But they left a new one: the envelope
Was it really a problem to be solved? Good tube amplifiers already existed back then, and a clean guitar tone was not something frowned upon.
>Hendrix’s mission was (...)
>His solution was (...)
I don't think Hendrix was on a 'mission' to solve engineering puzzles at all. He was just experimenting, as an artist.
1,000,000%. Guitar is one of those hobbies where people mythologize and build elaborate hagiographies around players they like and the gear that they used. Hendrix was a generational talent but I highly doubt he was sitting around enumerating problem statements and systematically exploring solution spaces. The Fuzz Face was one of like four dirtboxes available during that time so he chose that one. He flipped a guitar upside down because he could source one more easily than a lefty model. He leveraged feedback because he discovered it naturally and realized that he could make it sound totally badass.
The man clearly had a vision and executed it but his decisions were pragmatic, not the product of grand technical reasoning. It reminds me of the student who wrote a bunch of authors and asked to what degree they were conscious of the themes and symbolism in their work [1]. Many were not - as it turns out English teachers often put the cart before the horse. This is the rock and roll version of that.
I can't knock the article though as it has a lot of sound (pun intended) analysis in it as opposed to typical guitar forum dreck about NOS tube and hand-wired turret board magic.
[1] https://www.theparisreview.org/blog/2011/12/05/document-the-...
I've read that he claimed he played a right-handed guitar upside down because his father was superstitious and didn't like him doing things left-handed, so he'd play a right-handed guitar upside down most of the time and flip it over when he needed to play in front of his father. (I'm not sure why he didn't play a lefty guitar upside down if that was the case. Maybe availability was relevant, like you mentioned; maybe his father was familiar enough with guitars to recognize a left-handed one and figure out what was going on; or maybe, being better left-handed, he could play a righty upside down well enough, while playing it the standard way would have been more difficult.)
I was speaking with my 14 year old nephew via messaging last month. It was about a deep topic, synthetic consciousness. He wrote such an intelligent reply that I asked him: hey, was this from an LLM? He was insulted. I did research with his parents and found out that 90% no, he's just a very smart kid.
Is there a name for this mode of confusion yet?
The insulting didn't end there. You asked his parents! Even then you only landed at 90%, yet another insult because why can't he earn 100%? Ethical dilemmas on all sides!
I like some of the ideas in the article but there are some very "it wasn't just A, it was B" sentences in there. IEEE has a higher standard.
I think LLM's lack of "theory of mind" leads to them severely underperforming on narration and humor.
Seriously, there's no LLM stuff in here. Only em dashes, which were used in journalism decades before AI was even a thing.
I love how nonchalantly you threw this one in. I am proper jealous, how was it?
On your first remark, I agree. This is why I love Dire Straits and Mark Knopfler. The studio recordings are amazing, and then you listen to their live stuff and it's even better.
https://jimihendrixrecordguide.com/home-recordings/
(edit: syntax)
Voodoo Chile lyrics: "on the night I was born the moon turned fire red".
Poetic license? Stellarium reveals that on the early evening of November 27, 1942, in Seattle, the moon was low on the horizon: just 25 degrees altitude at 5:30 pm, directly east. The sun set at 5 pm. While not a full moon, it was 85% illuminated, so I'm calling it! The moon may well have glowed a warm orange-red on the night (of the day) Hendrix was born.
I suppose if I were going to recommend a single episode to Hacker News though, it would be https://500songs.com/podcast/episode-146-good-vibrations-by-... which begins with at least a half hour on the amazing (if not happy) life of the guy who invented the Theremin, Lev Sergeyevich Termen.
What’s the newer song you mention with the flat fuzz?
The guy built his own guitar as a teenager and has played it for the rest of his career: https://en.wikipedia.org/wiki/Red_Special
While some try to treat it as an exact science, it is not; there are things you still cannot put a number on, and yet it works...
Considering the amount of watts involved in this, it probably qualifies as rocket science haha
JM: Yeah. The tremolo sound from the intro? That was four Fender Twin Reverbs. Myself controlling the speed of two of them and the producer controlling the speed of the other two. So two amps were recorded on one side of the stereo and the other two on the other side. I recorded the part on the tape without the tremolo, and then I sent the part from tape out to four amps, and he controlled two, and I controlled the other two.
And it took a long time, because inevitably the tremolo would go out of time with the track, since the tremolo doesn't stay in regular clock time. Also we would go out of time with each other's amps, so we had to keep looking up at each other after every fifteen-second burst and kind of fess up: "Oh yeah, mine kind of went out of time." It took a long time, but I'm glad we did it that way, because if we had cut and pasted two seconds of audio, it wouldn't have had the same dynamic quality throughout the six minutes of the song, or however long it is.
https://www.westword.com/music/johnny-marr-on-how-he-created...
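The drift Marr describes is just two free-running LFOs at slightly mismatched rates; how far apart they get in fifteen seconds is easy to estimate. The 4 Hz rate and ~1% mismatch below are invented for illustration:

```python
def tremolo_phase_drift(rate_a, rate_b, seconds):
    """Phase offset, in cycles (0 to 0.5), between two free-running
    tremolo LFOs after running `seconds` from an in-sync start."""
    offset = ((rate_a - rate_b) * seconds) % 1.0
    return min(offset, 1.0 - offset)

# Two speed knobs set "the same" but off by roughly 1%: after a single
# fifteen-second burst the amps are already a quarter-cycle apart,
# hence the constant fessing up and re-syncing.
drift = tremolo_phase_drift(4.0, 4.05, 15.0)
```

It also shows why a cut-and-paste loop would have sounded different: the slow mutual drift across the whole six minutes is itself part of the texture.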
DSP, Control Engineering, Circuit Design, understanding pipelining and caching, and other fundamentals are important for people to understand higher levels of the abstraction layers (eg. much of deep learning is built on top of Optimization Theory principles which are introduced in a DSP class).
The value of Computer Science isn't the ability to whiteboard a Leetcode hard question or glue together PyTorch commands - it's the ability to reason across multiple abstraction layers.
And newer grads are significantly deskilled due to these curriculum changes. If I as a VC know more about Nagle's Algorithm (hi Animats!) than some of the potential technical founders for network security or MLOps companies, we are in trouble.
Isn't Nagle usually introduced in a networking class typically taken by CS (non-CE/EE) undergrads?
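For reference, Nagle's algorithm is the kernel-level coalescing of small TCP writes, and disabling it is the kind of one-liner that does come up in latency-sensitive systems:

```python
import socket

def make_low_latency_socket():
    """Create a TCP socket with Nagle's algorithm disabled.

    Nagle coalesces small writes into fewer segments (good for avoiding
    tinygram congestion, bad for latency-sensitive chatter); TCP_NODELAY
    turns it off, which is why it shows up in everything from game netcode
    to RPC and interconnect tuning."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s

s = make_low_latency_socket()
nodelay_enabled = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```

Knowing *when* to flip this, and what it costs in packet overhead, is the fundamentals point being argued here.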
Just because EEs are exposed to some mathematical concepts during their training doesn't mean that non-EEs are not exposed through a different path.
Networking, OS, and Distributed Systems are increasingly treated as CompE or even EE nowadays in the US.
> Just because EEs are exposed...
That's the thing - I truly do not believe that EE and CS should be decoupled, and I believe ECE as a stopgap is doing a disservice to the talent pipeline we need for my verticals to remain in the US, especially when comparing target American CS and EECS programs to peer CEE, Indian, and Israeli CS programs [0].
There is no reason that a CS major should not be required to take a survey course covering circuits, DSP, computer architecture, and OS fundamentals, when this is the norm in most CS programs abroad. Additionally, I do not see any reason for EEs and ECEs not to take Algorithms, Data Structures, and Compilers as well.
> Just because EEs are exposed to some mathematical concepts during their training doesn't mean that non-EEs are not exposed through a different path
Mind you, I'm primarily in Cybersecurity, AI/ML infra, DefenseTech, and DeepTech adjacent spaces - basically, anything aligned with the "American Dynamism" or Cyberstarts thesis.
From what I've seen, the most successful founders are those who are able to adeptly reason and problem solve, but are also able to communicate to technical buyers because you are selling a technical product where those people make the decision.
Just because an approach isn't useful today doesn't necessarily imply it isn't in the future and being exposed to those kinds of knowledge and foundational principles makes it easier for one to evaluate and reason through problem spaces that are similar but not necessarily the same - for example, going to the Nagle's example - this was a bog standard networking concept that has now become critical in foundation model training because interconnect performance is a critical problem which can impact margins.
A lot of foundational knowledge is useful no matter what, and is why we fund founders and hire talent at competitive salaries.
Yes, you need solid fundamentals. I am saying CE/EE is not where you get them unless you actually are designing circuits. If you're doing the hard tech like you're describing, take the more academically rigorous program of CS instead of the easy CE/EE program where you learn irrelevant skills like circuit layout (that is now done by an AI anyway!) rather than very relevant skills like OS and networking, and algorithms which is not required by EE.
I've never needed or benefited from most of the EE curriculum. There is an opportunity cost in learning things you don't need.
I know at MIT it was (and I think still is) one major - EECS, and students had substantial latitude on how much they wanted to concentrate into hardware or software at least after the intro courses.
Jimi was left handed
Artistic endeavors come from lots of places, not just people with an analytical mindset. Historically those two are seen as opposing tendencies, which I think is unfair, but it points to the importance of intuition and navigating perception and emotion for artists.
Music theory, Nashville notation
> attempts to model elements of the problem?
Ditto
> data gathering and data analysis?
Listening to a wide variety of music and understanding what makes a genre a genre
> simulation?
Cover songs, writing to a style
> Intentional application of principles of physics or some other pure domain to a real world problem?
Literally sound engineering
I’m curious because to tell you the truth the novelty struck me as similar to comparisons I’ve toyed with using LLMs on my own. The AI-generated logic between comparing two dissimilar things is too sterile for my liking.
I understand that this is appearing in technical publication, but for some reason that invites even further scrutiny on my behalf.
Please share more reasons behind your suspicions.
> and the component was the Octavia guitar pedal, created for Hendrix by sound engineer Roger Mayer.
So, Roger was the engineer. And, Jimi was the artist.
And if we can call ourselves software engineers, where our day-to-day (mostly) involves less calculus and more creative interpretation of loose ideas, in the context of a corpus of historical texts that we literally call "libraries" - are we not artists and art historians?
We're far closer to Jimi than Roger, in many ways. Pots and kettles :)
Just because I drive my car with immense focus, make precision shifts, and hit the apex of all of my turns when getting onto and off of the freeway doesn't make me a race car driver.
Engineers don't just feel good vibes about science and mix it into their work. It is the core of their work.
Simply having a methodology absolutely is not sufficient for being an engineer.
And great, you have an arbitrary system of ethics, like everyone does I imagine. But no one holds you to these ethics.
Further down there is a sentence: "First, the Fuzz Face is a two-transistor feedback amplifier that turns a gentle sinusoid signal into an almost binary “fuzzy” output." But the figure does not match this - there is no "gentle sinusoid" wave shown on the first fuzz face plot.
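The "almost binary" claim is easy to sanity-check: run a sine through high gain into a hard clipper and count how many samples end up pinned at the rails. This is a cartoon of the transfer behavior, not the actual Fuzz Face circuit:

```python
import math

def fuzz(sample, gain=50.0, rail=1.0):
    """Cartoon fuzz: large gain into a hard clipper. With enough gain,
    any 'gentle sinusoid' comes out nearly binary (square-ish)."""
    return max(-rail, min(rail, gain * sample))

# One cycle of a gentle sine, heavily overdriven:
n = 1000
wave = [fuzz(math.sin(2 * math.pi * i / n)) for i in range(n)]

# Fraction of samples stuck at a rail -- the "almost binary" part:
pinned = sum(1 for s in wave if abs(s) == 1.0) / n
```

With a gain of 50, the output spends the vast majority of each cycle flat against one rail or the other, crossing between them only in narrow bands around the zero crossings; those crossings are where the guitar's volume knob (by lowering the effective gain) widens the clean region and "cleans up" the fuzz.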
Any guitarist in a 1940s big band would have a big hollowbody guitar and an amp. That combination is incredibly prone to feedback. Everyone worked to reduce feedback and avoid it. That's what I do with my hollowbody when I play with a big band. It's the first thing that happens when you turn up.
Hendrix did not "discover" feedback, and in fact he did not discover the musical uses of feedback - you can hear it in BB King records that predate Hendrix, where feedback makes his notes "sing."
What Hendrix did was turn feedback into an intentional musical creation that he treated as a melodic voice.
One of the great things about a hi-gain setup like Hendrix's is how the feedback loop will inject an element of controlled chaos into the sound. It allows for emergent fluctuations in timbre that Hendrix can wrangle, but never fully control. It's the squealing, chaotic element in something like his 'Star Spangled Banner'. It's a positive feedback loop that can run away from the player and create all kinds of unexpected elements.
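One way to see that "controlled chaos" is a toy positive-feedback loop: a delay line (standing in for amp → air → string → pickup) closed through a saturating gain stage. A microscopic seed grows into a self-sustaining oscillation whose level is set by the nonlinearity rather than the player; everything here is illustrative:

```python
import math

def feedback_loop(loop_gain=1.5, delay=20, steps=2000, seed_level=1e-6):
    """Toy positive-feedback loop: the output re-enters after `delay`
    samples through an inverting, saturating (tanh) gain stage. Above
    unity loop gain, a tiny seed blows up into a stable oscillation."""
    buf = [0.0] * delay
    buf[0] = seed_level
    out = []
    for i in range(steps):
        fed_back = buf[i % delay]              # what left the loop `delay` steps ago
        y = math.tanh(-loop_gain * fed_back)   # inverting, saturating gain stage
        buf[i % delay] = y                     # feed it back in
        out.append(y)
    return out

howl = feedback_loop()                 # gain > 1: grows until saturation limits it
quiet = feedback_loop(loop_gain=0.5)   # gain < 1: the seed just dies away
```

The player's only handles are the loop gain (volume, distance to the amp) and the delay (which note is fretted, where the guitar points), which is exactly the "wrangle but never fully control" situation described above.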
The art of Hendrix's playing, then, is partly in how he harnessed that sound and integrated it into his voice. And of course, he's a force of nature when he does so.
A great place to hear artful feedback would be the intro to Prince's 'Computer Blue'. It's the squealing "birdsong" at the beginning and ending of the record. You can hear it particularly well if you search for 'Computer Blue - Hallway Speech Version' with the extended intro.
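The runaway behavior described above can be sketched as a toy loop: the speaker re-excites the string, the amp boosts it again, a little randomness stands in for the chaotic fluctuations, and saturation (the amp's physical limit) is the only thing that stops the growth. All the numbers here are invented for illustration, not a physical model.

```python
import random

def feedback_loop(gain=1.2, steps=200, limit=1.0, seed=1):
    """Toy positive-feedback loop: a tiny string vibration is
    re-amplified every pass until clipping caps it."""
    random.seed(seed)
    level = 0.001  # a barely-audible initial vibration
    history = []
    for _ in range(steps):
        level = gain * level + random.uniform(-1e-4, 1e-4)  # boost + chaos
        level = max(-limit, min(limit, level))               # amp saturates
        history.append(level)
    return history

h = feedback_loop()
```

With any loop gain above 1, the level grows exponentially from near silence until it slams into the limit, which is the "run away from the player" behavior in miniature.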
You know, I've heard that performance so many times over so many decades that I don't have to hit a play button or even close my eyes in order to hear it. It's there inside my head when I want it to be.
And somehow I never interpreted it in that way (sirens, screaming, etc) until just a moment ago. I thought it was just a quirky little early-morning break in the familiar tune from someone who had been up way too long by that point.
And now instead of just being the quirky sounds of an impromptu guitar solo that I can recall whenever I wish, it now has unpleasant pictures to go with it.
Thanks (I think).
I thought it was sheer genius that Hendrix was able to subtly work that into the national anthem, which is part of why it resonated so strongly with his audience. Without that background reference, though, I never supposed that younger generations would hear it entirely differently.
Slightly off-topic--
Before my time, but my professor* recalled to our class his experience watching a _live_ news report from Vietnam. Something shocking happened during the broadcast. As a visual-media scholar he contacted the station to obtain a copy. No go. He remarked how he never saw that footage ever again (at that time it would have been over 15 years ago). In our modern digital age it's difficult to imagine anything going live to the nation, and then disappearing.
* (Charles Chess, Introduction to Film, SJSU, c1992)
The Epstein files would like a chat with you.
As would "flood the zone".
There was one scene where Richard Burton and Elizabeth Taylor were arguing with each other. I watched their lips move, and somehow I heard Burton speaking his lines in his voice, and Taylor her lines in her voice. I had to do a double take to see that the sound was actually muted, but my mind re-created it anyway.
BrainOS 1.1> Optimize Memory (Y/N) __
I can tell when a musician is lip syncing their hit song, because nobody sings a song the same way twice, and the performance exactly matched the CD version of the song.
Instead, I can recall the complete works of Roger Waters or Nine Inch Nails, but not the names of the songs unless I really studied that part. I can recall themes from TV shows from decades ago, but be unable to place the name of the show.
At any given time, anywhere at all, I can listen to any of at least five different covers of Fat Bottomed Girls -- and have no idea who performed any of them, and therefore no ability to share them with others.
It's an interesting way to be, and it is the only way I know, but there are reasons I'm terrible at being a DJ.
Like a jealous plumber, worried that Kim Kardashian's "Break the Internet" photo series will take away from his appeal, hurriedly posting photos of his plumber's crack online...
I had heard it a lot in punk and pop-punk to create swells. I improvised my still-favorite solo that day.
The discovery of feedback tones and their incorporation into the musical experience: a three-hour-warm bank of tubes turned up to the limit, with a maxed-out savant unlocking new realms of sound.
One thing I notice is that his playing doesn't require a rhythm guitarist. What made it work so well was Mitch Mitchell: a jazz drummer whose playing was heavily influenced by the classics. It complemented Jimi's guitar tone perfectly.
That would have blown the doors off of everything.
I don't think there was another guitar player as "out there" as Jimi until EVH came along - a little more controlled, but just as confident and chaotic. EVH was quite the systems engineer himself (Variac, Floyd Rose later on, etc.)
Miles always impressed me with his ability to pick the best to back him up, and /then/ let them take the front. Some tracks he barely plays on, waiting minutes for his entry.
Jimi wanted the best to back him up, too. But I agree with you; I'm just pointing out why I think he didn't.
In Davis' autobiography, he mentions trying to work with Jimi. I don't think it would have worked really, but who knows. Jimi was completely self-taught, while Miles went to Juilliard; I don't see how they would have communicated musically, literally. Like, if Miles tells Jimi to try a diminished chord here, or some modal scale there, Miles would have ended up doing a LOT of teaching along the way. And I say this as a guitarist of 30+ years who loves both of them.
Yeah that's a good insight.
Hendrix’s girlfriend Kathy Etchingham claims he never abused her. Some third parties dispute her account of the relationship.
His arrest record suggests at least some type of altercation with a previous girlfriend but it’s far from clear cut to me.
People are complex and reality is complex. I myself was subject to false accusations about abuse from a disgruntled ex girlfriend (who actually WAS in fact physically and mentally abusive to me and I have the scars to prove it).
But regardless, I have zero issues reflecting on a person’s accomplishments and talents even in the context of them being a horrible person. In fact, I find that part of the intrigue of really talented people. Reality and people are quite multi-dimensional. The only general rule I know is that nobody is perfect and holding up ANYone as some example of moral perfection is almost certainly wrong.
OTOH: why does his accidental fatal pathway tarnish him morally to you?
Very mixed message: "Don't beat women nor vomit!"
Also maybe not until the night of his first big gig there.
Townshend had Marshall build 100-watters so he could play clean at louder volumes. Clapton had already been cranking it with a Gibson SG, which is a characteristic sound all its own; he was in the audience at the gig and was blown away watching Hendrix.
Every year from at least 1964 to 1984, more advanced amps were made than ever existed before.