On 3/25/2025 12:53 PM, Daniel Lezcano wrote:
[ ... ]
+static int __init nxp_stm_clocksource_init(struct device *dev, const char *name,
+					   void __iomem *base, struct clk *clk)
+{
+	struct stm_timer *stm_timer;
+	int ret;
+
+	stm_timer = devm_kzalloc(dev, sizeof(*stm_timer), GFP_KERNEL);
+	if (!stm_timer)
+		return -ENOMEM;
+
+	stm_timer->base = base;
+	stm_timer->rate = clk_get_rate(clk);
+
+	stm_timer->scs.cs.name = name;
+	stm_timer->scs.cs.rating = 460;
+	stm_timer->scs.cs.read = nxp_stm_clocksource_read;
+	stm_timer->scs.cs.enable = nxp_stm_clocksource_enable;
+	stm_timer->scs.cs.disable = nxp_stm_clocksource_disable;
+	stm_timer->scs.cs.suspend = nxp_stm_clocksource_suspend;
+	stm_timer->scs.cs.resume = nxp_stm_clocksource_resume;
+	stm_timer->scs.cs.mask = CLOCKSOURCE_MASK(32);
+	stm_timer->scs.cs.flags = CLOCK_SOURCE_IS_CONTINUOUS;
+
+	ret = clocksource_register_hz(&stm_timer->scs.cs, stm_timer->rate);
+	if (ret)
+		return ret;
What about calling clocksource_unregister() in the remove callback for cleanup?
Sorry, I don't get it :/
There is no cleanup after clocksource_register_hz() succeeds.
I noticed that other drivers, such as
drivers/clocksource/timer-tegra186.c and
drivers/clocksource/timer-sun5i.c, call clocksource_unregister() from
their platform_driver .remove callback. Shouldn't that apply here as
well?
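As a rough sketch of what I mean (assuming stm_timer is saved with
platform_set_drvdata() in probe; the remove callback below is mine,
not from this patch):

static void nxp_stm_timer_remove(struct platform_device *pdev)
{
	struct stm_timer *stm_timer = platform_get_drvdata(pdev);

	/* Undo the clocksource_register_hz() that succeeded in probe. */
	clocksource_unregister(&stm_timer->scs.cs);
}

This is essentially what timer-tegra186.c does in its remove path.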
[ ... ]
+static int nxp_stm_clockevent_set_next_event(unsigned long delta,
+					     struct clock_event_device *ced)
+{
+	struct stm_timer *stm_timer = ced_to_stm(ced);
+	u32 val;
+
+	nxp_stm_clockevent_disable(stm_timer);
While examining the code base, I came across
drivers/clocksource/timer-imx-gpt.c, specifically its
mx1_2_set_next_event() function, which includes protection against
missed events. Using a similar approach would let us keep the STM
module enabled and only alter the channel's register state. The risk
can also be mitigated by adjusting min_delta_ns based on the tick
frequency.
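Sketch only, modeled on mx1_2_set_next_event(); the STM_CNT/STM_CMP
offsets below are illustrative placeholders, not register names taken
from this patch:

static int nxp_stm_clockevent_set_next_event(unsigned long delta,
					     struct clock_event_device *ced)
{
	struct stm_timer *stm_timer = ced_to_stm(ced);
	u32 cmp;

	/* Program the channel compare relative to the free-running counter. */
	cmp = readl(stm_timer->base + STM_CNT) + delta;
	writel(cmp, stm_timer->base + STM_CMP);

	/*
	 * If the counter already raced past the new compare value, the
	 * event was missed: return -ETIME so the core retries with a
	 * larger delta instead of waiting for a full counter wrap.
	 */
	return (s32)(cmp - readl(stm_timer->base + STM_CNT)) < 0 ? -ETIME : 0;
}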
How would you validate that?
How would I validate that this works?
If that is the question: I see that the core auto-adjusts the minimum
delta in clockevents_program_min_delta() when
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST is enabled. Initially, I would
count how many times dev->set_next_event() returns -ETIME. By the end
of that function, min_delta_ns should reflect the actual minimum delta
the device can handle.
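As a throwaway validation aid (purely illustrative, extending the
sketch above with a counter), each hit should become rarer as the core
raises min_delta_ns:

static atomic_t stm_missed_events = ATOMIC_INIT(0);

static int nxp_stm_clockevent_set_next_event(unsigned long delta,
					     struct clock_event_device *ced)
{
	struct stm_timer *stm_timer = ced_to_stm(ced);
	u32 cmp;

	cmp = readl(stm_timer->base + STM_CNT) + delta;
	writel(cmp, stm_timer->base + STM_CMP);

	if ((s32)(cmp - readl(stm_timer->base + STM_CNT)) < 0) {
		/* Count missed events to watch the adjustment converge. */
		pr_debug("stm: missed event #%d, delta=%lu\n",
			 atomic_inc_return(&stm_missed_events), delta);
		return -ETIME;
	}

	return 0;
}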
[ ... ]
+static int __init nxp_stm_timer_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	struct stm_instances *stm_instances;
+	const char *name = of_node_full_name(np);
+	void __iomem *base;
+	int irq, ret;
+	struct clk *clk;
+
+	stm_instances = (typeof(stm_instances))of_device_get_match_data(dev);
+	if (!stm_instances) {
+		dev_err(dev, "No STM instances associated with a cpu");
+		return -EINVAL;
+	}
+
+	base = devm_of_iomap(dev, np, 0, NULL);
+	if (IS_ERR(base)) {
+		dev_err(dev, "Failed to iomap %pOFn\n", np);
+		return PTR_ERR(base);
+	}
+
+	irq = irq_of_parse_and_map(np, 0);
+	if (irq <= 0) {
+		dev_err(dev, "Failed to parse and map IRQ: %d\n", irq);
+		return -EINVAL;
+	}
From commit description:
The first probed STM is used as a clocksource, the second will be the
broadcast timer and the rest are used as a clockevent with the
affinity set to a CPU.
Why is the interrupt mandatory when the node is probed as a clocksource?
It relies on the DT description, and it does not hurt to have a
consistent code path for clockevent/clocksource, even if the interrupt
is never requested for the clocksource later.
If so, in my opinion, it would make sense to use the same STM instance
for both the clocksource and the broadcast clockevent: a single
instance can accommodate both functions, which reduces the number of
STM instances consumed.
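Sketch only; the two setup helpers are hypothetical names standing in
for the registration code already present in this patch:

static int __init nxp_stm_combined_init(struct device *dev, const char *name,
					void __iomem *base, struct clk *clk,
					int irq)
{
	struct stm_timer *stm_timer;
	int ret;

	stm_timer = devm_kzalloc(dev, sizeof(*stm_timer), GFP_KERNEL);
	if (!stm_timer)
		return -ENOMEM;

	stm_timer->base = base;
	stm_timer->rate = clk_get_rate(clk);

	/* The free-running counter provides the clocksource... */
	ret = nxp_stm_clocksource_setup(stm_timer, name);	/* hypothetical */
	if (ret)
		return ret;

	/*
	 * ...while a compare channel of the same instance drives the
	 * broadcast clockevent, registered with cpumask_of(0) and a
	 * rating low enough that per-CPU clockevents stay preferred.
	 */
	return nxp_stm_broadcast_setup(stm_timer, irq);		/* hypothetical */
}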