Hi Alan,


Here are my responses to issues in your 21 Apr and 23 Apr mail.


> 1. LVDS Connector Pinout

> Issue: Dick Ferris has suggested a pinout different from the 9 Feb draft

> document.  This new suggested pinout is based on the use of a specific


> device.  The Japanese group have already begun implementation of the 9


> Feb pinout.

> Discussion:  Although Dick's is more optimal for a specific LVDS


> device, the 9 Feb pinout has been chosen to be generically 'good'  in a

> general engineering sense, is likely to be workable, and does not


> constrain the use of future LVDS interface devices.


It is a matter of no little irony that the optimisation, such as it is,

which favours the 8 and 16-bit chips instead of one which favours the

4-bit family, was already manifest in Table 2 of the *29 Jan* Draft.

This was explained to us at the Haystack meeting by Will Aldrich when he

introduced the new chips, and accepted without demur.


There is no change in this sense in the 9 Feb Draft, so any perceived 

problems or "future constraints" persist there, as well as in the Japanese

group's prototypes.


How real then, are these concerns?



My suggested pinout took Will's optimisation as a given, and proceeded on

two fronts.


The first was to regroup the 35 registered signals so they map neatly into

the minimum number of chips.  Apart from the obvious economy this affords,

detailed designs for the higher VSI frequencies show that this is

extremely desirable in order to meet the timing specifications.  This

arrangement is *inclusive*, not exclusive, since obviously one 16-bit chip

can be exactly replaced by four quads, eight duals, or even sixteen

singles.  The converse is not true.
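The packing arithmetic behind this can be sketched in a few lines (the figure of 35 registered signals is from the text; the packing rule itself, simple ceiling division, is my assumption about the intent):

```python
# Illustrative sketch: minimum chip count to register the 35 VSI signals
# for each LVDS chip width.  Widths correspond to the single/dual/quad/
# 8-bit/16-bit families mentioned in the text.
import math

N_SIGNALS = 35

def chips_needed(width):
    """Minimum number of width-bit chips needed to cover N_SIGNALS signals."""
    return math.ceil(N_SIGNALS / width)

for width in (16, 8, 4, 2, 1):
    print(f"{width:2d}-bit chips: {chips_needed(width)}")
```

This makes the inclusive/exclusive point concrete: the three 16-bit positions can always be populated instead by quads, duals, or singles, whereas a pinout scattered for the 4-bit family cannot be collapsed back into wide chips.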


The second was to allocate adjacent rather than opposite pins to each

signal pair, in order to minimise impedance discontinuity and

cross-coupling in the interconnects.  Such a configuration is used in both

of the dominant LVDS standards using MDR connectors, namely DFP developed

by VESA, and FPD Link developed by Nat Semi.  Both are endorsed by

DISM/JEIDA in Japan.
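The difference between the two pairing schemes can be sketched as follows (the pin numbering here is hypothetical, purely to show the geometry, and is not the actual MDR pinout from any draft):

```python
# Illustrative only: two ways of assigning differential signal pairs on a
# two-row connector with n_pins contacts.
def adjacent_pairs(n_pins):
    """Pair neighbouring pins (1,2), (3,4), ... - the DFP / FPD Link style."""
    return [(p, p + 1) for p in range(1, n_pins, 2)]

def opposite_pairs(n_pins):
    """Pair pin k with the pin directly opposite, k + n_pins//2."""
    half = n_pins // 2
    return [(p, p + half) for p in range(1, half + 1)]

print(adjacent_pairs(8))   # [(1, 2), (3, 4), (5, 6), (7, 8)]
print(opposite_pairs(8))   # [(1, 5), (2, 6), (3, 7), (4, 8)]
```

With adjacent pairing the two conductors of each pair stay physically close through the connector, which is what minimises the impedance discontinuity and cross-coupling described above.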

> 2. Bi-directional  LVDS Signals

> Issue: In his notes of 3 Mar 00, Dick Ferris has suggested the use of

> reverse-channel functions.  The 9 Feb draft specifies only unidirectional

> signals on LVDS cables.

> Discussion:  The possible use of bi-directional signals on the LVDS cables

> was a *major* discussion item at the Haystack meeting in late January.  In

> the end we agreed that the signals on each LVDS cable would be

> uni-directional.  That is *the* reason that DPS1PPS and DPSCLOCK are on a

> separate cable to the DOM.  Despite the fact that reverse-channel signals

> do provide some possible advantages, I think we must stick to the January

> agreement for the base VSI specification.


My proposal does *not* contradict this agreement.  Item 2c.1 "Reverse

Channels Function" does not contain any bidirectional signals, nor does it

specify hardware which could support them.  The *only* effect of 2c.1 on

the base VSI specification is to define the (normal, ie forward) control

signal P/QCTRL.


This signal is in the same category as ROT1PPS which is reserved for the

proposed delay extension, and P/QDATA which is reserved (so far) for no

defined function whatsoever.


> As Rick points out in his note

> below, a reverse-channel capability could still be built into the DIM/DOM

> using bi-directional driver/receiver chips, but that they must be

> configurable to meet the uni-directional  VSI-H specification.


This is precisely what is provided in *optional extensions* 2c.2 and 2c.3.


We note:


  Inclusion of integrated DPS clock signals into a DOM does not displace

  DPSCLOCK and DPS1PPS from the Aux connector : these are basic and must

  be provided on all conformant DOMs.


  Exclusion of any optional extension does not by definition reflect on

  the conformance status of a DTS.


  A DOM not supporting bidirectional signals would sensibly have its QCTRL

  hard-wired in the default/"1" state, so that even if it was ever

  connected to a DPS which did, the reverse channel transceivers could

  never be activated.


  There is no change in the forward characteristics of any channels.


  With QCTRL set to "1" the two spares QSPC and QSPD are free to be

  utilised in the normal direction for any other optional extension.


In short the existence of bidirectional options is totally transparent to

those who do not wish to use them.


On the other hand the inclusion of such options in the Standard enhances

interchangeability of equipment between systems that do choose to use them,

which is in the very spirit of VSI.  (Integrated DPS Clock Signals [2c.2]

are currently used and have popular support; Integrated ALT1PPS Signal

[2c.3] is a suggested solution to problems already identified for TVG and

PDATA operation.)


Overall it seems to be a win-win-win solution : a win for those who do not

want to use bidirectional signals on these particular cables, a win for

those who do, and a win for VSI.  It is hard to do better than that.


> 3. Delay in DOM/DIM

> Issue: A delay option in the DOM was suggested in the 9 Feb draft after

> some discussion at the January meeting.  It was probably my fault that it

> was vaguely defined, which left the issue rather unclear.  Since then,

> there have been various opinions expressed regarding its implementation,

> and Rick suggests adding a 'delay' option to the DIM, as well.

> Discussion: Adding a fixed delay capability is clearly useful to help in

> the case of limited size of correlator input buffers.  Anything fancier,

> such as delay tracking, has very correlator-specific requirements that are

> very difficult to accommodate.


A DOM delay function is common if not universal practice now and as such

is an obvious candidate for inclusion in (base) VSI.  Delay over almost

any range is a natural feature easily provided in storage (tape, disk

etc.) type DOMs, so the suggested +/-0.5sec range spec is fine there. 

However the situation is quite different in network type DOMs.


Here the function requires a discrete buffer capable of accommodating total

throughput at the maximum rate for the full range of delay.  For a one

second range as above this amounts to a 128MByte dual port memory with

31ns access for 32Mbps bit-streams, rising to 512MBytes of 8ns dual port

memory for 128Msps data.
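These figures follow directly from rate times range (the count of 32 bit-streams, the full VSI complement, is assumed here; MBytes are decimal):

```python
# Check of the buffer sizes quoted above: dual-port memory needed in a
# network-type DOM for a one-second total delay range.
def buffer_bytes(rate_per_stream_hz, n_streams, range_s):
    """Total buffer in bytes for n_streams bit-streams over range_s seconds."""
    return rate_per_stream_hz * n_streams * range_s / 8

MB = 1e6
print(buffer_bytes(32e6, 32, 1.0) / MB)    # 128.0 MByte at 32 Mbps
print(buffer_bytes(128e6, 32, 1.0) / MB)   # 512.0 MByte at 128 Msps
print(1 / 32e6 * 1e9)                      # 31.25 ns access at 32 Mbps
```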


Such buffers are very expensive and in no way justified by the science in

normal circumstances, so we should either trim the range spec to a

sensible minimum, or provide two different specs.


How much delay is actually needed?  Starting with an actual network

system, the KSP Real-time VLBI System (Kiuchi et al, JCRL vol. 46 No. 1,

March 1999 pp. 83-89), the "DTS's", ATM terminals, provided a mere 4800

bytes of data space to sort four 256Mbps data streams, ie a few

microseconds worth.  This is insignificant and is really internal to the

transmission process anyway.


The real delay is provided in the KSP correlator as normal.  It has

a range of +/- 5 milliseconds to accommodate the observing geometry,

transmission delays and clock offsets.  This figure is unusually small

because of the relatively close telescopes but it provides a useful upper

bound on the expected clock offsets (usually < 1ms in practice).


For storage type only DTS's and terrestrial telescopes, +/-5ms for

clock models added to the 0 to -22ms delay required to align all data at

the geocentre, gives an untidy but practical minimum delay range of +5 to

-27ms.


This corresponds to exactly 128kBytes per bit-stream (4MB total) at 32Mbps

for a network type DTS.  By itself this would provide for a reasonable

sized array, but for global arrays a much larger range is required to

accommodate differential transmission times.  eg. half a great circle

requires n*70ms where n is the effective velocity factor for the network.
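The arithmetic behind these figures runs as follows (the Earth radius value and the count of 32 bit-streams are my assumptions; the rates and the +5 to -27ms range are from the text):

```python
# Check of the delay-range arithmetic above.
import math

C_KM_S = 299_792.458        # speed of light
EARTH_RADIUS_KM = 6_378.0   # equatorial radius (assumed value)

# +5 to -27 ms total range -> buffer per bit-stream at 32 Mbps
range_s = (5 + 27) * 1e-3                 # 32 ms total span
per_stream = 32e6 * range_s / 8           # bytes per bit-stream
print(per_stream)                         # 128000.0, i.e. 128 kBytes
print(per_stream * 32 / 1e6)              # 4.096, i.e. ~4 MB for 32 streams

# One-way light time across half a great circle (~20000 km of path);
# real links are longer by the network's effective velocity factor n.
half_gc_km = math.pi * EARTH_RADIUS_KM
print(half_gc_km / C_KM_S * 1e3)          # ~66.8 ms, hence n * ~70 ms
```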


Also SVLBI is firmly with us, implying much longer symmetrical geometric

corrections. eg VSOP (apogee 21000km) +/- 90ms, TDRS (geostationary) +/-

140ms, RadioAstron (proposed 75000km) +/- 270ms, Moon +/- 1.27s.
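These ranges can be roughly checked as the one-way light time from a geocentric distance of apogee altitude plus Earth radius (the radius value and the Moon figure are my assumed inputs; the apogee altitudes are from the text):

```python
# Rough check of the SVLBI geometric delay ranges quoted above.
C_KM_S = 299_792.458
R_EARTH_KM = 6_378.0   # assumed equatorial radius

apogee_km = {                      # apogee altitudes as quoted
    "VSOP": 21_000,
    "TDRS (geostationary)": 35_786,
    "RadioAstron (proposed)": 75_000,
    "Moon": 378_000,               # ~mean distance minus R_EARTH_KM
}
for name, apogee in apogee_km.items():
    t_ms = (apogee + R_EARTH_KM) / C_KM_S * 1e3
    print(f"{name}: +/- {t_ms:.0f} ms")
```

The computed values land within a few percent of the +/- 90ms, 140ms, 270ms and 1.27s figures above, which is as close as such round numbers warrant.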


Where to draw the line?  Perhaps the "minimum practical" +5 to -27ms for a

unified specification, on the expectation that larger delays will be

custom fitted when and as required?


> 4. Bit-stream selection and re-ordering

> Issue: We have agreed that any 2**n input bit-streams at the input to the

> DIM can be selected for transmision to the DOM.  In addition, we have

> agreed that arbitrary bit-stream re-ordering at the DOM output is

> necessary.  Dick Ferris has also suggested re-ordering in the DIM.

> Discussion: The benefit of re-ordering in the DIM is not clear to

> me.  Arbitrary bit-stream re-ordering at the DOM output is sufficient to

> cover all bases.


The DIM needs a bit-stream re-ordering *function* to be exercised prior to

data being acquired from the DAS.  This was debated before and incorporated

in the Nov 3 Draft, and the reasoning is in that correspondence.


It doesn't really matter where or when *physical* stream switching occurs. 

eg. the re-ordering necessary to map DAS data onto the correct DTS

bit-streams could simply be transmitted along with the data and given

effect in the DOM crossbar during playback.  I believe this is what Alan

has in mind.  In this case the actual crossbar function would be the DIM

mapping combined with any extra mapping due to the DOM re-ordering

function if utilised at that time.
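The combined crossbar function is simply a composition of the two mappings, which can be sketched as follows (a minimal illustration of the idea, not anything specified in the draft; the 4-stream mappings are invented for the example):

```python
# Sketch: the DOM crossbar giving effect to the DIM re-ordering function
# combined with any DOM re-ordering, as composition of two permutations
# over bit-stream indices.
def compose(dim_map, dom_map):
    """Output stream i carries input stream dim_map[dom_map[i]]."""
    return [dim_map[d] for d in dom_map]

# Example: DIM function reverses four streams, DOM swaps the first pair.
dim_map = [3, 2, 1, 0]
dom_map = [1, 0, 2, 3]
print(compose(dim_map, dom_map))   # [2, 3, 1, 0]
```

Because composition can be evaluated at playback time, the DIM mapping can indeed travel with the data and be applied entirely in the DOM crossbar, invisibly to the user.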


This process should be invisible to the user, and may save a little

hardware in the DIM, though I suspect not much.

> 5. LVDS Cable Specs

> Issue: The detailed specification of cable is very complicated.

> Discussion:  Dick has done a very admirable job of creating a

> suggested cable spec, but the Japanese point out the difficulties

> of making objective measurements of some parameters.


I have not seen any details of what these difficulties are.  Given the

basic nature of the specifications plus the accompanying Notes and

Measurement Procedures it was hoped that this process was self evident.


I will be glad to address any specific problems raised.


> Recommendation:  I suggest that the primary specification for the

> cable is that it must deliver proper LVDS signals (i.e. meet the

> detailed electrical specs) at all times and under all conditions when

> a compliant DOM output is connected to a compliant DIM input, which

> are electrically well defined.


As noted in the accompanying "Comments" doc this is the formal

requirement, but as such is effectively impossible to implement because of the

need to test with a near infinite set of compliant transmitters and

receivers.  Hence the "more practical if less direct specifications" which

provide engineers a finite and independent set of measurements to work with.



> 9. Serial control port spec

> Issue:  The 9 Feb draft as proposed is not fully internally consistent in the

> specification standards.


> For the selection of the RJ45 pin assignment for the serial

> communication, we are suggesting to use DB9 or DB25 to avoid

> the possibility of plugging an Ethernet cable into the comm-port

> by accident. It will also avoid confusion between the two possible

> pin-assignments for the RJ45 connector.


I support this suggestion from Koyama-san.  We can go with TIA/EIA-574 on

the DB9s, computer as DTE, DTS as DCE.  A whip around both shops and

business suppliers showed that 574 type "RS-232C" was marginally more

popular than 561, and cables are more readily available.  (Both formats

are soon to be subsumed by USB anyway).  PCs are if nothing else

consistent about their DB9 connectors so there should be no need for

null-modem cables.

> 1. LVDS Receiver

> Issue: Dick Ferris suggests that a receiver with no input should


> default to logic '1'.  Others suggest that the fail-safe state should simply be a

> stable '0' or '1', with no preference.


It's going to be "1" anyway because that is part of TIA/EIA-644, and

examination of the current receiver data sheets will show that it has been

implemented as part of the "fail-safe" specs.  The specification was

included in my text so that designers and testers could know and rely on

this characteristic.  Rick has also commented on this aspect.


The matching definition of a default value of "1" for the

transmitter outputs goes way back to CCITT V.1 and has been adopted

consistently in numerous international interface standards ever since.

Combined with the receiver characteristics above it also makes for

transition-free cable connection and disconnection.


Given that no advantage has been claimed for "0" or random default

states this point should stand.