On Mon, Oct 01, 2018 at 01:56:32PM -0700, Saravana Kannan wrote:
On 09/26/2018 07:34 AM, Jordan Crouse wrote:
On Tue, Sep 25, 2018 at 01:02:15PM -0500, Rob Herring wrote:
On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
This binding is intended to represent the relations between the interconnect
controllers (providers) and consumer device nodes. It will allow creating links
between consumers and interconnect paths (exposed by interconnect providers).
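For reference, a consumer of such a binding would look roughly like the
following; the node, provider and endpoint names here are made up for
illustration:

    gpu@5000000 {
        ...
        /* path from the GPU master port to DDR */
        interconnects = <&mmnoc MASTER_GFX3D &bimc SLAVE_EBI>;
        interconnect-names = "gfx-mem";
    };

The driver then requests bandwidth on the named path at runtime and the
framework configures the providers along it.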
As I mentioned in person, I want to see other SoC families using this
before accepting. They don't have to be ready for upstream, but WIP
patches or even just a "yes, this works for us and we're going to use
this binding on X".
Also, I think the QCom GPU use of this should be fully sorted out. Or
more generically, how this fits into the OPP binding, which seems to be
endlessly extended...
Been meaning to send this out for a while, but I got caught up with other
stuff. This is a discussion I wouldn't mind having now. To jog memories,
this is what I posted a few weeks ago:
https://patchwork.freedesktop.org/patch/246117/
This seems like the easiest way to me to tie the frequency and the bandwidth
quota together for GPU devfreq scaling but I'm not married to the format and
I'll happily go a few rounds on the bikeshed if we can get something we can
be happy with.
Jordan
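The patch itself isn't reproduced here, but as a rough illustration of what
"tying the frequency and the bandwidth quota together" means, picture a
per-frequency bandwidth value living next to each GPU operating point (the
bandwidth property name below is purely made up, not what the patch
defines):

    gpu_opp_table: opp-table {
        compatible = "operating-points-v2";

        opp-710000000 {
            opp-hz = /bits/ 64 <710000000>;
            /* made-up property: bandwidth to vote at this frequency */
            qcom,gpu-bw-MBps = <7216>;
        };
    };

devfreq then only has to pick a frequency; the matching bandwidth vote
comes along with it.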
That GPU BW patch is very specific to a device-to-device mapping and
doesn't work well for other use cases (e.g., clients that calculate
their bandwidth needs based on the use case, etc.).
Interconnect paths have different BW (bandwidth) operating points
that they can support. For example: 1 GB/s, 1.7 GB/s, 5 GB/s, etc.
Having a mapping from the GPU or CPU to those is fine/necessary, but we
still need a separate BW OPP table for interconnect paths to list
what they can actually support.
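To make "BW OPP table" concrete, it would be something along these lines
(the property name is only illustrative, not an existing binding):

    ddr_bw_opp_table: opp-table {
        compatible = "operating-points-v2";

        ddr_bw_opp1: opp-1000000 {
            opp-peak-kBps = <1000000>;   /* 1 GB/s */
        };
        ddr_bw_opp2: opp-1700000 {
            opp-peak-kBps = <1700000>;   /* 1.7 GB/s */
        };
        ddr_bw_opp3: opp-5000000 {
            opp-peak-kBps = <5000000>;   /* 5 GB/s */
        };
    };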
Two different ways we could represent BW OPP tables for interconnect paths:
1. Represent interconnect paths (CPU to DDR, GPU to DDR, etc.) as
devices and have OPPs for those devices.
2. Have an "interconnect-opp-tables" DT binding similar to
"interconnects" and "interconnect-names". So if a device (GPU, video
decoder, I2C device, etc.) needs to vote on an interconnect path, it
can also list the OPP tables that those paths support (sketched below).
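A rough sketch of option (2), with every name below made up for
illustration:

    gpu@5000000 {
        ...
        interconnects = <&mmnoc MASTER_GFX3D &bimc SLAVE_EBI>;
        interconnect-names = "gfx-mem";
        /* hypothetical property: one BW OPP table per listed path */
        interconnect-opp-tables = <&ddr_bw_opp_table>;
    };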
I know Rob doesn't like (1). But I'm hoping at least (2) is
acceptable. I'm open to other suggestions too.
Both (1) and (2) need BW OPP tables similar to frequency OPP tables.
That should be easy to add and Viresh is open to that. I'm open to
other options too, but the fundamental missing part is how to tie a
list of BW OPPs to interconnect paths in DT.
Once we have one of the above two options, we can use the
required-opps field (already present in the kernel) for the mapping
from a GPU OPP to a particular BW need (suggested by Viresh during an
in-person conversation).
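So, with one of the options above in place, each GPU OPP could point at the
BW OPP it needs; roughly (reusing the illustrative labels from the BW OPP
table sketch above):

    gpu_opp_table: opp-table {
        compatible = "operating-points-v2";

        opp-710000000 {
            opp-hz = /bits/ 64 <710000000>;
            /* pick the 5 GB/s point on the GPU->DDR path at this freq */
            required-opps = <&ddr_bw_opp3>;
        };
    };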
Assuming we are willing to maintain the bandwidth OPP tables and the
names / phandles needed to describe a 1:1 GPU -> bandwidth mapping,
I'm okay with required-opps, but for the sake of argument, how would
required-opps work for a device that needs to vote multiple paths
for a given OPP?
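For example, if the GPU has to vote both a GPU->DDR path and a GPU->config
path at a given frequency, is the idea simply multiple phandles in
required-opps, with the association to paths implied by ordering?
Something like this (labels made up, illustrative only):

    opp-710000000 {
        opp-hz = /bits/ 64 <710000000>;
        /* one BW OPP per interconnect path the device votes */
        required-opps = <&gfx_mem_bw_opp3>, <&gfx_cfg_bw_opp2>;
    };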