How transformers can compute distances along a curve locally.
Published on October 24, 2025 2:03 AM GMT

Overview:

There is an interesting mechanism GPT-2 appears to use to measure distances between duplicate tokens. The mechanism reminds me a lot of twisted-pair cabling in communications. It is fiddly to explain in context, so I've tried to abstract away most of the details and give a clean toy version of the mechanism. I think some of the structures involved could be useful in contexts other than computing distances.

Setup:

We have a set of points $\{x_i\}_{i=0}^{n} \subset \mathbb{R}^d \subset \mathbb{R}^{d_{\text{model}}}$, with $x_i = f(i/n)$ where $f : [0,1] \to \mathbb{R}^d$ is a smooth function. We take as input $x_i$, $x_j$ with $|i - j| \le k \ll n$. We want to construct a transformer which can estimate $|i - j|$ given $x_i, x_j$. We additionally assume that $d$ is small relative to the embedding dimension $d_{\text{model}}$.

The mechanism:

We set up $M + 1 < n$ "sentry points" $\{s_i\}_{i=0}^{M}$ uniformly along $[0,1]$, and define $g : [0,1] \to [0, M] \cap \mathbb{Z}$ sending $t \in [0,1]$ to the index of the closest sentry point.

Then we have
$$\left\| x_i - x_j - \left(\tfrac{i}{n} - \tfrac{j}{n}\right) f'\!\left(s_{g(i/n)}\right) \right\|_2 = \left\| \int_{j/n}^{i/n} \left( f'(t) - f'\!\left(s_{g(i/n)}\right) \right) dt \right\|_2 \le L \left( \frac{1}{2M} + \frac{|i - j|}{n} \right) \cdot \frac{|i - j|}{n},$$
where $L$ is such that $\|f'(x) - f'(y)\|_2 \le L |x - y|$ for all $x, y \in [0,1]$.

So
$$\left| \frac{(x_i - x_j) \cdot f'\!\left(s_{g(i/n)}\right)}{\left\| f'\!\left(s_{g(i/n)}\right) \right\|_2^2} - \frac{i - j}{n} \right| \le \frac{\left\| \int_{j/n}^{i/n} \left( f'(t) - f'\!\left(s_{g(i/n)}\right) \right) dt \right\|_2}{\left\| f'\!\left(s_{g(i/n)}\right) \right\|_2} \le \frac{L}{\left\| f'\!\left(s_{g(i/n)}\right) \right\|_2} \left( \frac{1}{2M + 2} + \frac{|i - j|}{n} \right) \cdot \frac{|i - j|}{n}.$$

Therefore, if we can approximate $\frac{(x_j - x_i) \cdot f'\left(s_{g(i/n)}\right)}{\left\| f'\left(s_{g(i/n)}\right) \right\|_2^2}$, then we can approximate $j - i$.

Attention mechanism:

If we are given a two-token input $x_j$, $x_i$, then assuming $d \le d_{\text{head}}$, two attention heads are sufficient to compute $x_j - x_i$ (have one head output $-x_i$ and the other output $x_j$). We write $x_j - x_i$ to a subspace orthogonal to $x_i$ so that the MLP can cleanly access $x_i$ later.

MLP mechanism:

The MLP consists of $2M + 2$ neurons, with a pair of neurons associated with each sentry point. For each sentry point $s_m \in \left\{0, \frac{1}{M+1}, \frac{2}{M+1}, \ldots, 1\right\}$, we define a neuron pair:
$$\text{Neuron}^{+}_m = \mathrm{ReLU}\!\left( b_{m,1}\, f(s_m) \cdot x_i + b_{m,2} + b_{m,3}\, (x_j - x_i) \cdot f'(s_m) \right)$$
$$\text{Neuron}^{-}_m = \mathrm{ReLU}\!\left( b_{m,1}\, f(s_m) \cdot x_i + b_{m,2} - b_{m,3}\, (x_j - x_i) \cdot f'(s_m) \right)$$
where the coefficients $b_{m,\ell}$ are tuned so that $b_{m,1}\, f(s_m) \cdot x_i + b_{m,2} > \epsilon$ when $m = g(i/n)$, and $b_{m,1}\, f(s_m) \cdot x_i + b_{m,2} < 0$ otherwise. Additionally, we set up $b_{m,3}\, (x_j - x_i) \cdot f'(s_m)$ so that it has magnitude less than $\epsilon$ whenever $|i - j| \le k$.

[Figure: example sentry neuron activations when $x_i = x_j$, with $M = 5$; each colour corresponds to the activation of a different sentry neuron.]

We can pick $M + 1$ coprime to $n$ so that the sentry activations don't vanish at $i/n$ for any $i$, and so they are bounded below by some $\epsilon > 0$. A signal of magnitude $\epsilon$ can then be encoded in the difference between the activations of a pair of these sentry neurons.

Setting up these sentries relies on $f$ not coming close to intersecting itself, so that the dot product with $f(s_m)$ is only high on a connected interval.

We wrote $x_j - x_i$ in a subspace orthogonal to $x_i$ so that the sentry neurons can all be written in the standard form $\mathrm{ReLU}(w^\top h_0 + b)$, where $h_0$ is the residual stream post-attention.

From the $m$th neuron pair we then output $\frac{C_m}{2 b_{m,3}} \left( \text{Neuron}^{+}_m - \text{Neuron}^{-}_m \right) v$.

Under this construction, $\text{Neuron}^{+}_m$ and $\text{Neuron}^{-}_m$ always activate at the same time as each other, so the output of the $m$th neuron pair is $0$ if $m \ne g(i/n)$, and $C_m\, (x_j - x_i) \cdot f'(s_m)\, v$ if $m = g(i/n)$.

Since only a single neuron pair activates at once, the complete output of this MLP layer is $C_{g(i/n)}\, (x_j - x_i) \cdot f'\!\left(s_{g(i/n)}\right) v$. Setting $C_{g(i/n)}$ proportional to $\frac{1}{\left\| f'\left(s_{g(i/n)}\right) \right\|_2^2}$, we get an output proportional to $(j - i)\, v$.
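As a sanity check, here is a minimal numpy sketch of the toy construction. The curve $f$ (a quarter of the unit circle), the values of $n$, $M$, $k$, the sentry spacing, and the coefficients $b_{m,\ell}$ and $C_m$ are all illustrative choices for the demo, not anything read off from GPT-2.

```python
# Minimal sketch of the sentry-pair mechanism on a toy curve.
# All names and parameter choices here are assumptions for the demo.
import numpy as np

n, M, k = 1000, 5, 20                      # points on the curve, sentries - 1, max |i - j|
s = np.arange(M + 1) / M                   # sentry points s_0, ..., s_M on [0, 1]

def f(u):
    # Smooth, non-self-intersecting curve in R^2 (a quarter of the unit circle).
    return np.array([np.cos(0.5 * np.pi * u), np.sin(0.5 * np.pi * u)])

def f_prime(u):
    return 0.5 * np.pi * np.array([-np.sin(0.5 * np.pi * u), np.cos(0.5 * np.pi * u)])

# Gate coefficients: b1 * f(s_m).x_i + b2 > 0 exactly when i/n is closer to s_m than
# to any other sentry, because f(s_m).f(t) = cos(pi/2 * (t - s_m)) for this curve.
b1 = 100.0
b2 = -100.0 * np.cos(0.5 * np.pi * (0.5 / M))
b3 = 1.0                                   # scale of the differential term
C = n / np.array([f_prime(sm) @ f_prime(sm) for sm in s])   # readout in units of tokens

def estimate_offset(i, j):
    """Estimate j - i from x_i = f(i/n), x_j = f(j/n) using the sentry neuron pairs."""
    xi, xj = f(i / n), f(j / n)
    total = 0.0
    for m in range(M + 1):
        common = b1 * (f(s[m]) @ xi) + b2              # shared "near sentry m?" term
        diff = b3 * ((xj - xi) @ f_prime(s[m]))        # signed distance signal
        plus = max(common + diff, 0.0)                 # Neuron^+_m
        minus = max(common - diff, 0.0)                # Neuron^-_m
        total += C[m] / (2.0 * b3) * (plus - minus)    # pair output; `common` cancels
    return total

i, j = 337, 352
print(round(estimate_offset(i, j), 2), "vs true offset", j - i)   # roughly 15 vs 15
```

For the pair chosen above (away from a sentry boundary) the estimate comes out close to the true offset of 15; near the transitions between sentries the gating margin shrinks and the estimate degrades, which is the noise issue noted below.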
Extensions of mechanism:

The above mechanism is clean and captures the key ideas. A nice thing about it is that the $f(s_m) \cdot x_i$ term can be noisy, but this doesn't matter, because the common noise gets cancelled out, similar to twisted-pair encoding. However, there can still be issues caused by noise at the transitions between sentries. It is also not robust to $f$ intersecting itself, and the number of sentries required grows with $n$.

There are cases where this kind of two-neuron interference cancellation could be useful outside of computing distance. For example, if you want to distinguish between British and Canadian English, you could have:
$$\text{Neuron}_{\text{canada}} = \mathrm{ReLU}\!\left( \text{Commonwealth} + \epsilon\, (\text{Canadian}) \right)$$
$$\text{Neuron}_{\text{british}} = \mathrm{ReLU}\!\left( \text{Commonwealth} + \epsilon\, (\text{British}) \right)$$
and then take the difference between the two. The interference that would usually make it difficult to distinguish between two very similar dialects gets cancelled out.

Though this is probably just PCA??
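As a toy numeric sketch of that dialect-pair cancellation (the feature names, magnitudes, and noise levels are made up for illustration):

```python
# Two ReLU neurons sharing a large, noisy common term; their difference
# recovers a tiny dialect signal that would otherwise be swamped.
import numpy as np

rng = np.random.default_rng(0)
commonwealth = 5.0 + rng.normal(0.0, 0.5, size=1000)    # large, noisy shared feature
is_canadian = rng.random(1000) < 0.5
eps = 0.01
canadian = eps * is_canadian                             # tiny dialect-specific features
british = eps * (~is_canadian)

neuron_canada = np.maximum(commonwealth + canadian, 0.0)
neuron_british = np.maximum(commonwealth + british, 0.0)

# The shared term (and its noise) cancels in the difference, leaving only
# the small dialect signal.
pred_canadian = (neuron_canada - neuron_british) > 0
print(np.mean(pred_canadian == is_canadian))             # -> 1.0
```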