Re: [RFC] yamldt v0.5, now a DTS compiler too

From: David Gibson
Date: Tue Oct 10 2017 - 23:50:03 EST


On Tue, Oct 10, 2017 at 06:19:03PM +0300, Pantelis Antoniou wrote:
> Hi David,
>
> > On Oct 10, 2017, at 04:50 , David Gibson <david@xxxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > On Mon, Oct 09, 2017 at 06:07:28PM +0300, Pantelis Antoniou wrote:
> >> Hi David,
> >>
> >>> On Oct 9, 2017, at 03:00 , David Gibson <david@xxxxxxxxxxxxxxxxxxxxx> wrote:
> >>>
> >>> On Sun, Oct 08, 2017 at 04:08:03PM -0700, Frank Rowand wrote:
> >>>> On 10/07/17 03:23, Pantelis Antoniou wrote:
> >>>>> Hi Rob,
> >>>>>
> >>>>>> On Oct 6, 2017, at 16:55 , Rob Herring <robherring2@xxxxxxxxx> wrote:
> >>>>>>
> >>>>>> On Tue, Oct 3, 2017 at 12:39 PM, Pantelis Antoniou
> >>>>>> <pantelis.antoniou@xxxxxxxxxxxx> wrote:
> >>>>>>> Hi Rob,
> >>>>
> >>>> < snip >
> >>>>
> >>>>>>> eBPF is portable, can be serialized after compiling in the schema file
> >>>>>>> and can be executed in the kernel.
> >>>>>>
> >>>>>> Executing in the kernel is a non-goal for me.
> >>>>
> >>>> Executing in the kernel is an anti-goal for me.
> >>>>
> >>>> We are trying to reduce the device tree footprint inside the kernel,
> >>>> not increase it.
> >>>>
> >>>> 99.99% of the validation should be possible statically, in the compile
> >>>> phase.
> >>>>
> >>>>
> >>>>>>> By stripping out all documentation-related properties and nodes,
> >>>>>>> keeping only the compiled filters, you can generate a dtb blob
> >>>>>>> that, passed to the kernel, can be used to verify all runtime
> >>>>>>> changes to the kernel's live tree. eBPF enforces an execution
> >>>>>>> model that is 'safe', so we can be sure that no foul play is
> >>>>>>> possible.
> >>>>
> >>>> Run time changes can be assumed correct (short of bugs in the overlay
> >>>> application code), if the base tree is validated, the overlay is validated,
> >>>> and the interface between the live tree and the overlay is a
> >>>> connector.
> >>>
> >>> In addition, no amount of schema validation can really protect the
> >>> kernel from a bad DT. Even if the schemas can 100% verify that the DT
> >>> is "syntactically" correct, which is ambitious, it can't protect
> >>> against a DT which is in the right form, but contains information that
> >>> is simply wrong for the hardware in question. That can stuff the
> >>> kernel at least as easily as an incorrectly formatted DT.
> >>>
> >>
> >> I disagree.
> >>
> >> There are multiple levels of validation. For now we're only talking about
> >> binding validation. There can be SoC-level validation, board-level validation,
> >> revision-level validation and finally application-specific validation.
> >>
> >> Binding validation makes sure properties/nodes follow the binding document.
> >> For instance, that a foo device has a mandatory interrupt property.
> >>
> >> Simplified:
> >>
> >> interrupt = <X>;
> >>
> >> Binding validation would 'catch' errors like assigning a string or omitting
> >> the interrupt property altogether.
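> >>
> >> E.g., a purely illustrative sketch (the values are made up):
> >>
> >> 	interrupt = <10>;	/* accepted: a cell, property present */
> >> 	interrupt = "ten";	/* rejected: wrong type */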
> >>
> >> SoC-level validation would list the available interrupt numbers that a given
> >> SoC supports for that device.
> >>
> >> For example that interrupt can only take the values 10 or 99 in a given SoC.
> >>
> >> Board-level validation would narrow this down even further, to a value of 10 for
> >> a given board model.
> >>
> >> Similarly, revision-level validation would place further restrictions on the
> >> allowed configuration.
> >>
> >> Finally, application-specific validation could place restrictions based on the
> >> intended application the hardware is used for. For instance, devices that must
> >> not exceed a given power budget would have restrictions on the processor's
> >> clock frequency, bus frequencies, etc.
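> >>
> >> E.g., again purely illustrative (the node and the value are made up):
> >>
> >> 	cpu@0 {
> >> 		compatible = "vendor,cpu";
> >> 		clock-frequency = <600000000>;	/* capped to stay within the power budget */
> >> 	};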
> >
> > This doesn't help. In order to do this, the validator would need
> > information that's essentially equivalent to the content of DT, at
> > which point there's no point to the DT at all - and you're left with
> > the problem of validating the information that the validator has.
>
> That would be the case if hardware IP only has a single way to be configured.

Right, and if there's more than one way, then the validator can't
possibly tell whether the DT has the right one.

DTs must always come from a trusted source, because if they don't,
then you don't need the DT in the first place (you could build your
own).

> The industry standard nowadays is picking reusable IP blocks and integrating them
> together in an SoC. The driver and the binding are common to every platform that uses
> that block, but the allowed configuration varies according to what the hardware
> people use in a given instance.

> > Fundamentally a validator that's useful *cannot* tell the difference
> > between a correct tree and one which _could_ be correct for some
> > theoretical hardware, but isn't for this particular hardware.
>
> That's why there's a reason for a nested hierarchy of bindings, IMO.

Nothing about how you structure the validation can change the basic
fact that there are only two possibilities. Either:

a) You know the hardware structure independent of the DT, in which
case the DT is pointless

or

b) You don't know everything about the hardware without the DT, in
which case you can't know if the DT is right for this hardware
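
Concretely, a sketch (the compatible string, address and interrupt
number are all made up): a fragment like

	serial@44e09000 {
		compatible = "vendor,uart";
		reg = <0x44e09000 0x1000>;
		interrupts = <72>;
	};

can satisfy every schema you throw at it and still be wrong, simply
because on this particular board the UART sits at a different address
or uses a different interrupt. Only independent knowledge of the
hardware, i.e. case (a), can tell the difference.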

> Completeness of validation schemes can be a differentiating factor when
> choosing parts for a hardware design. They would surely cut down development time.
>
>
>
> Regards
>
> – Pantelis
>

--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
