The 8K "D channel" (I've never heard it put in those terms) is actually used
for the A/B bit signalling on a standard T-1 running D4 framing with robbed bit
signalling. These A/B bits are actually used for sending on/off hook
information, but really 8K was never needed. Instead, the 8K figure came out of
an old AT&T specification on T-1 repeater timing regeneration. The spec
(62411) said that there could never be 8 consecutive 0 bits in a row on a data
stream, or inline repeaters could drift out of range and errors could occur. We
are obviously talking about the days of ringing tank circuits being used to
generate T-1 timing pulses. Thus by "robbing" every 8th bit out of the voice
channels (i.e. 7/8 * 64), the 56 kbps figure came into being, and since voice
channels needed on/off hook information, some of these bits could be used for
sending that info while forcing the rest to a 1, which maintained the 8-bit
non-zero rule. The idea was that the LSB being stolen wouldn't have any
perceivable impact on the S/N (signal-to-noise) ratio that users would talk
across. For data it can be another story, though: since 8 bits give a higher
digital sampling resolution, the quantization noise is lower because fewer
sampling inaccuracies occur.
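The robbed-bit arithmetic above can be sketched in a few lines (the figures are the ones from the text; the `force_lsb` helper is just an illustration of the "force the bit to a 1" rule, not any real framer API):

```python
# Sketch of the robbed-bit math on a D4-framed T-1.
BITS_PER_SAMPLE = 8
SAMPLE_RATE = 8000                 # PCM samples/sec per voice channel

full_rate = BITS_PER_SAMPLE * SAMPLE_RATE       # 8 * 8000 = 64000 bps
usable_bits = BITS_PER_SAMPLE - 1               # every 8th bit is robbed
data_rate = usable_bits * SAMPLE_RATE           # 7 * 8000 = 56000 bps

def force_lsb(sample: int) -> int:
    """Force the least significant bit to 1, so a channel byte can
    never contribute 8 consecutive zeros to the line."""
    return sample | 0x01

print(full_rate, data_rate)        # 64000 56000
print(bin(force_lsb(0b10100000)))  # 0b10100001
```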
Thus theoretically, a 64K sampled channel will be somewhat quieter than a 56K
sampled channel. Nyquist's theorem can be used to calculate the noise
differential. Of course this theorem (a theorem which was used to
describe/predict the effect of PCM/PAM sampling on an analog waveform) is
slowly being disproven as the speeds of analog modems continue to increase.
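As a rough sketch of that noise differential, the textbook uniform-quantizer approximation (SNR ≈ 6.02·N + 1.76 dB per N bits) gives about a 6 dB difference between 8-bit and 7-bit samples. Note this rule of thumb ignores the mu-law companding actually used on T-1 voice, so treat the numbers as ballpark only:

```python
# Ballpark quantization-SNR comparison for 8-bit vs. 7-bit PCM samples,
# using the uniform-quantizer approximation SNR ~= 6.02*N + 1.76 dB.
# Real T-1 voice uses mu-law companding, so these are rough figures.
def quant_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

snr_64k = quant_snr_db(8)                 # ~49.9 dB
snr_56k = quant_snr_db(7)                 # ~43.9 dB
print(round(snr_64k - snr_56k, 2))        # 6.02 dB in favor of 64K
```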
Sorry if I got carried away. I used to teach this stuff and miss it a
little...
Jeff Binkley
ASA Network Computing