Re: [PATCH v9 2/8] dt-bindings: Introduce interconnect binding

From: Saravana Kannan
Date: Wed Oct 03 2018 - 14:06:52 EST




On 10/03/2018 02:33 AM, Sudeep Holla wrote:
> On Tue, Oct 02, 2018 at 11:56:56AM -0700, Saravana Kannan wrote:
>> On 10/02/2018 04:17 AM, Sudeep Holla wrote:
>> [...]

>>> Yes, I do understand that I have made the same point multiple times, and
>>> it's intentional. We need to get the fragmented f/w support story fixed.
>>> Different ARM vendors are doing different things in f/w, and ARM sees the
>>> same fragmentation story as before. We have come up with a new
>>> specification, and my annoying repeated emails are just a constant
>>> reminder of that.
>>>
>>> I do understand that we have existing implementations to consider, but
>>> fixing the functionality in an arbitrary way is not good design, and it
>>> is better to get them fixed for the future.
>>
>> I believe the fragmentation you are referring to is in the
>> interface/communication protocol. I see the benefit of standardizing that
>> as long as the standard actually turns out to be good. But that's
>> completely separate from what the FW can/can't do. Asking to standardize
>> what the FW can/can't do doesn't seem realistic as each chip vendor will
>> have different priorities - power, performance, cost, chip area, etc.
>> It's the conflation of these separate topics that doesn't help IMHO.
>
> I agree on interface/communication protocol fragmentation, and firmware
> can implement whatever the vendor wishes. What I was also referring to
> was the mix-and-match approach, which should be avoided.
>
> e.g. Device A's and B's PM is managed completely by firmware using OSPM
> hints. Suppose Device X's PM is dependent on Devices A and B, in which
> case it's simpler and cleaner to leave Device X's PM to firmware as well.
> Reading the state of A and B and using that as a hint for X is just
> overhead which firmware can manage better. That was my main concern here:
> A = CPU, B = some other device, and X = the interconnect to which A and B
> are connected.
>
> If CPU OPPs are obtained from f/w and this interconnect from DT, mapping
> them is a mess, and that is what concerned me. I am sorry if that's not
> the scenario here; I may have been mistaken.
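
(For concreteness, the kind of mix being described might look roughly like
the DT sketch below. This is a hypothetical illustration only: the node
names, compatible strings, the &scmi_dvfs reference and the MASTER_*/SLAVE_*
IDs are made up, and only the interconnects, interconnect-names and
#interconnect-cells properties come from the proposed binding.)

        /* Device A: a CPU whose DVFS is handled entirely by firmware,
         * e.g. through an SCMI performance domain, so no OPP table is
         * described in DT. */
        cpu@0 {
                device_type = "cpu";
                compatible = "arm,cortex-a75";
                reg = <0x0>;
                clocks = <&scmi_dvfs 0>;  /* hypothetical SCMI DVFS domain */
        };

        /* Device X: the interconnect, described in DT and scaled by the
         * OS using the proposed provider binding. */
        noc: interconnect@1500000 {
                compatible = "vendor,example-noc";  /* placeholder */
                reg = <0x1500000 0x10000>;
                #interconnect-cells = <1>;
        };

        /* Device B: some other device connected to the NOC, here
         * requesting a path to memory through the proposed consumer
         * properties. */
        gpu@5000000 {
                compatible = "vendor,example-gpu";  /* placeholder */
                reg = <0x5000000 0x10000>;
                interconnects = <&noc MASTER_GPU &noc SLAVE_DDR>;  /* made-up IDs */
                interconnect-names = "gpu-mem";
        };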

What you are asking for would be the ideal case, but this is not an ideal
world. There are tons of constraints for each chip vendor. Saying you can't
mix and match makes the perfect the enemy of the good. Adding FW support for
A and B might make them optimal, but adding support for X might not be
possible because of multiple real-world constraints (chip area, cost, time
to market, etc.). Saying "either do it all or do nothing" is going to hold
back a lot of progress that can come in increments.

Heck, we do the same thing in the kernel: we add basic, simple features
first and then improve on them. Why is it suddenly frowned upon if FW/HW
follows the same approach? I'll just have to agree to disagree with you on
this viewpoint.

-Saravana

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project