On 12/23/22 17:31, Wesley Cheng wrote:
The QC ADSP is able to support USB playback and capture, so that the
main application processor can be placed into lower CPU power modes. This
adds the required AFE port configurations and port start command to start
an audio session.
It would be good to clarify what sort of endpoints can be supported. I
presume the SOF-synchronous case is handled, but how would you deal with
async endpoints with feedback (be it explicit or implicit)?
Note that it would be very hard to justify not supporting async
endpoints: there are quite a few devices that are exposed as async to
work around an obscure legacy issue on Windows but are really
SOF-synchronous endpoints that never ask for any change of pace.
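In case it helps, below is the sort of (untested) check I'd expect to see
before claiming an endpoint for the offload path; the helper name is made
up, the sync-type bits are the standard ones from ch9.h:

#include <linux/types.h>
#include <linux/usb/ch9.h>

/* Hypothetical helper: only claim endpoints whose sync type the DSP
 * path can actually service; async would need feedback handling.
 */
static bool offload_ep_supported(const struct usb_endpoint_descriptor *epd)
{
	switch (epd->bmAttributes & USB_ENDPOINT_SYNCTYPE) {
	case USB_ENDPOINT_SYNC_SYNC:
	case USB_ENDPOINT_SYNC_ADAPTIVE:
		return true;
	case USB_ENDPOINT_SYNC_ASYNC:
		/* explicit or implicit feedback endpoint */
		return false;
	default:
		return false;
	}
}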
Signed-off-by: Wesley Cheng <quic_wcheng@xxxxxxxxxxx>
---
.../sound/qcom,q6dsp-lpass-ports.h | 1 +
sound/soc/qcom/qdsp6/q6afe-dai.c | 47 +++++
sound/soc/qcom/qdsp6/q6afe.c | 183 ++++++++++++++++++
sound/soc/qcom/qdsp6/q6afe.h | 46 ++++-
sound/soc/qcom/qdsp6/q6dsp-lpass-ports.c | 23 +++
sound/soc/qcom/qdsp6/q6dsp-lpass-ports.h | 1 +
sound/soc/qcom/qdsp6/q6routing.c | 8 +
7 files changed, 308 insertions(+), 1 deletion(-)
diff --git a/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h b/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
index 9f7c5103bc82..746bc462bb2e 100644
--- a/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
+++ b/include/dt-bindings/sound/qcom,q6dsp-lpass-ports.h
@@ -131,6 +131,7 @@
#define RX_CODEC_DMA_RX_7 126
#define QUINARY_MI2S_RX 127
#define QUINARY_MI2S_TX 128
+#define USB_RX 129
The commit message says the DSP can support playback and capture, but
here there's capture only ...
static const struct snd_soc_dapm_route q6afe_dapm_routes[] = {
+ {"USB Playback", NULL, "USB_RX"},
... but here RX means playback?
I am not sure I understand the convention on directions, or what is
actually supported.
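To make my confusion concrete: is the intent that USB_RX only ever
registers a playback stream, roughly along the lines of the made-up
sketch below (none of this is taken from the patch), with capture coming
later? If so, spelling that out in the commit message would help.

#include <sound/soc.h>
#include <dt-bindings/sound/qcom,q6dsp-lpass-ports.h>

/* Illustration only: RX taken as the path towards the sink, i.e. host
 * playback, so the USB_RX DAI exposes a playback stream and nothing else.
 */
static struct snd_soc_dai_driver usb_rx_dai_example = {
	.name = "USB_RX",
	.id = USB_RX,
	.playback = {
		.stream_name = "USB Playback",
		.rates = SNDRV_PCM_RATE_8000_192000,
		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
		.channels_min = 1,
		.channels_max = 2,
	},
};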
+struct afe_param_id_usb_cfg {
+/* Minor version used for tracking USB audio device configuration.
+ * Supported values: AFE_API_MINOR_VERSION_USB_AUDIO_CONFIG
+ */
+ u32 cfg_minor_version;
+/* Sampling rate of the port.
+ * Supported values:
+ * - AFE_PORT_SAMPLE_RATE_8K
+ * - AFE_PORT_SAMPLE_RATE_11025
+ * - AFE_PORT_SAMPLE_RATE_12K
+ * - AFE_PORT_SAMPLE_RATE_16K
+ * - AFE_PORT_SAMPLE_RATE_22050
+ * - AFE_PORT_SAMPLE_RATE_24K
+ * - AFE_PORT_SAMPLE_RATE_32K
+ * - AFE_PORT_SAMPLE_RATE_44P1K
+ * - AFE_PORT_SAMPLE_RATE_48K
+ * - AFE_PORT_SAMPLE_RATE_96K
+ * - AFE_PORT_SAMPLE_RATE_192K
+ */
+ u32 sample_rate;
+/* Bit width of the sample.
+ * Supported values: 16, 24
+ */
+ u16 bit_width;
+/* Number of channels.
+ * Supported values: 1 and 2
That aligns with my feedback on the cover letter: if you connect a
device that supports more than 2 channels, should the DSP even expose
this DSP-optimized path?
Oh, and I forgot: what happens if there are multiple audio devices
connected? Can the DSP deal with all of them? If not, how is this handled?
+ */
+ u16 num_channels;
+/* Data format supported by the USB. The supported value is
+ * 0 (#AFE_USB_AUDIO_DATA_FORMAT_LINEAR_PCM).
+ */
+ u16 data_format;
+/* this field must be 0 */
+ u16 reserved;
+/* device token of actual end USB aduio device */
typo: audio
+ u32 dev_token;
+/* endianness of this interface */
+ u32 endian;
Is this a USB concept? I can't recall any part of the USB audio class
spec saying the data can be big- or little-endian.
+/* service interval */
+ u32 service_interval;
+} __packed;
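Related to the channel and rate questions above, this is roughly how I
assume the structure gets filled at hw_params time (sketch only, not
compile-tested; the minor-version constant, the endian value and the
source of the service interval are my assumptions). If so, anything
beyond 2 channels or 24 bits has to be rejected before this point, or
the offload path not exposed at all for such devices.

#include <linux/string.h>
#include <sound/pcm_params.h>

static void usb_afe_cfg_fill(struct afe_param_id_usb_cfg *cfg,
			     struct snd_pcm_hw_params *params)
{
	memset(cfg, 0, sizeof(*cfg));
	cfg->cfg_minor_version = AFE_API_MINOR_VERSION_USB_AUDIO_CONFIG;
	cfg->sample_rate = params_rate(params);
	cfg->bit_width = params_width(params);		/* 16 or 24 */
	cfg->num_channels = params_channels(params);	/* 1 or 2 */
	cfg->data_format = 0;	/* AFE_USB_AUDIO_DATA_FORMAT_LINEAR_PCM */
	cfg->endian = 0;	/* cf. endianness question above */
	cfg->service_interval = 0; /* presumably from the endpoint descriptor */
}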
+int afe_port_send_usb_dev_param(struct q6afe_port *port, struct q6afe_usb_cfg *cfg)
+{
+ union afe_port_config *pcfg = &port->port_cfg;
+ struct afe_param_id_usb_audio_dev_params usb_dev;
+ struct afe_param_id_usb_audio_dev_lpcm_fmt lpcm_fmt;
+ struct afe_param_id_usb_audio_svc_interval svc_int;
+ int ret = 0;
+
+ if (!pcfg) {
+ pr_err("%s: Error, no configuration data\n", __func__);
Can you use dev_err() here and in the rest of the code?
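i.e. something like this, assuming struct q6afe_port keeps a pointer back
to the parent q6afe instance and its struct device as elsewhere in
q6afe.c (not compile-tested):

	if (!pcfg) {
		dev_err(port->afe->dev, "%s: Error, no configuration data\n",
			__func__);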