[GIT pull] timer updates for 4.12

From: Thomas Gleixner
Date: Mon May 01 2017 - 06:44:18 EST


Linus,

please pull the latest timers-core-for-linus git tree from:

git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers-core-for-linus

The timer department delivers:

- More year 2038 rework
- A massive rework of the ARM architected timer
- Preparatory patches to allow NTP correction of clock event devices to
avoid early expiry
- The usual pile of fixes and enhancements all over the place

Thanks,

tglx

------------------>
Alexander Kochetkov (5):
dt-bindings: Clarify compatible property for rockchip timers
ARM: dts: rockchip: Update compatible property for rk322x timer
clocksource/drivers/rockchip_timer: Implement clocksource timer
ARM: dts: rockchip: Add timer entries to rk3188 SoC
ARM: dts: rockchip: disable arm-global-timer for rk3188

Arnd Bergmann (1):
arm64/arch_timer: Mark errata handlers as __maybe_unused

Arun Raghavan (1):
rlimits: Print more information when CPU/RT limits are exceeded

David Engraf (1):
timers, sched_clock: Update timeout for clock wrap

Deepa Dinamani (7):
time: Delete do_sys_setimeofday()
time: Change posix clocks ops interfaces to use timespec64
time: Change k_clock clock_get() to use timespec64
time: Change k_clock clock_getres() to use timespec64
time: Change k_clock clock_set() to use timespec64
time: Change k_clock timer_set() and timer_get() to use timespec64
time: Change k_clock nsleep() to use timespec64

Fu Wei (16):
clocksource: arm_arch_timer: clean up printk usage
clocksource: arm_arch_timer: rename type macros
clocksource: arm_arch_timer: rename the PPI enum
clocksource: arm_arch_timer: move enums and defines to header file
clocksource: arm_arch_timer: add a new enum for spi type
clocksource: arm_arch_timer: rework PPI selection
clocksource: arm_arch_timer: split dt-only rate handling
clocksource: arm_arch_timer: refactor arch_timer_needs_probing
clocksource: arm_arch_timer: move arch_timer_needs_of_probing into DT init call
clocksource: arm_arch_timer: add structs to describe MMIO timer
clocksource: arm_arch_timer: split MMIO timer probing.
acpi/arm64: Add GTDT table parse driver
clocksource: arm_arch_timer: simplify ACPI support code.
acpi/arm64: Add memory-mapped timer support in GTDT driver
clocksource: arm_arch_timer: add GTDT support for memory-mapped timer
acpi/arm64: Add SBSA Generic Watchdog support in GTDT driver

John Stultz (1):
MAINTAINERS: Add Stephen Boyd as timekeeping reviewer

Linus Walleij (3):
clocksource: Augment bindings for Faraday timer
clocksource/drivers/gemini: Rename Gemini timer to Faraday
clocksource/drivers/fttmr010: Refactor to handle clock

Marc Zyngier (18):
arm64: Allow checking of a CPU-local erratum
arm64: Add CNTVCT_EL0 trap handler
arm64: Define Cortex-A73 MIDR
arm64: cpu_errata: Allow an erratum to be match for all revisions of a core
arm64: cpu_errata: Add capability to advertise Cortex-A73 erratum 858921
arm64: arch_timer: Add infrastructure for multiple erratum detection methods
arm64: arch_timer: Add erratum handler for CPU-specific capability
arm64: arch_timer: Move arch_timer_reg_read/write around
arm64: arch_timer: Get rid of erratum_workaround_set_sne
arm64: arch_timer: Rework the set_next_event workarounds
arm64: arch_timer: Make workaround methods optional
arm64: arch_timer: Allows a CPU-specific erratum to only affect a subset of CPUs
arm64: arch_timer: Move clocksource_counter and co around
arm64: arch_timer: Save cntkctl_el1 as a per-cpu variable
arm64: arch_timer: Enable CNTVCT_EL0 trap if workaround is enabled
arm64: arch_timer: Workaround for Cortex-A73 erratum 858921
arm64: arch_timer: Allow erratum matching with ACPI OEM information
arm64: arch_timer: Add HISILICON_ERRATUM_161010101 ACPI matching data

Matt Redfearn (2):
MIPS/Malta: Probe gic-timer via devicetree
Clocksource/mips-gic: Remove redundant non devicetree init

Matthias Kaehlcke (1):
clocksource: Use GENMASK_ULL in definition of CLOCKSOURCE_MASK

Myungho Jung (1):
timer/sysclt: Restrict timer migration sysctl values to 0 and 1

Nicholas Mc Guire (1):
timekeeping: Remove pointless conversion to bool

Nicolai Stange (28):
clocksource: sh_cmt: Compute rate before registration again
clocksource: sh_tmu: Compute rate before registration again
clocksource: em_sti: Split clock prepare and enable steps
clocksource: em_sti: Compute rate before registration
clocksource: h8300_timer8: Don't reset rate in ->set_state_oneshot()
clockevents: Make clockevents_config() static
x86/xen/time: Set ->min_delta_ticks and ->max_delta_ticks
m68k/coldfire/pit: Set ->min_delta_ticks and ->max_delta_ticks
powerpc/time: Set ->min_delta_ticks and ->max_delta_ticks
sparc/time: Set ->min_delta_ticks and ->max_delta_ticks
x86/lguest/timer: Set ->min_delta_ticks and ->max_delta_ticks
hexagon/time: Set ->min_delta_ticks and ->max_delta_ticks
clockevents/drivers/dw_apb: Set ->min_delta_ticks and ->max_delta_ticks
clockevents/drivers/metag: Set ->min_delta_ticks and ->max_delta_ticks
x86/numachip timer: Set ->min_delta_ticks and ->max_delta_ticks
clockevents/drivers/sh_cmt: Set ->min_delta_ticks and ->max_delta_ticks
clockevents/drivers/atlas7: Set ->min_delta_ticks and ->max_delta_ticks
MIPS: clockevent drivers: Set ->min_delta_ticks and ->max_delta_ticks
x86/apic/timer: Set ->min_delta_ticks and ->max_delta_ticks
blackfin: time-ts: Set ->min_delta_ticks and ->max_delta_ticks
c6x/timer64: Set ->min_delta_ticks and ->max_delta_ticks
mn10300/cevt-mn10300: Set ->min_delta_ticks and ->max_delta_ticks
s390/time: Set ->min_delta_ticks and ->max_delta_ticks
score/time: Set ->min_delta_ticks and ->max_delta_ticks
tile/time: Set ->min_delta_ticks and ->max_delta_ticks
um/time: Set ->min_delta_ticks and ->max_delta_ticks
unicore32/time: Set ->min_delta_ticks and ->max_delta_ticks
x86/uv/time: Set ->min_delta_ticks and ->max_delta_ticks

Rafał Miłecki (1):
clocksource: Add missing line break to error messages

Russell King (2):
clocksource/drivers/orion: Read clock rate once
clocksource/drivers/orion: Add delay_timer implementation

Stephen Boyd (1):
hrtimer: Remove hrtimer_peek_ahead_timers() leftovers

Tom Hromatka (1):
sysrq: Reset the watchdog timers while displaying high-resolution timers


Documentation/arm64/silicon-errata.txt | 1 +
.../bindings/timer/cortina,gemini-timer.txt | 22 -
.../devicetree/bindings/timer/faraday,fttmr010.txt | 33 +
.../bindings/timer/rockchip,rk-timer.txt | 12 +-
MAINTAINERS | 1 +
arch/alpha/kernel/osf_sys.c | 4 +-
arch/arm/boot/dts/rk3188.dtsi | 17 +
arch/arm/boot/dts/rk322x.dtsi | 2 +-
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/arch_timer.h | 43 +-
arch/arm64/include/asm/cpucaps.h | 3 +-
arch/arm64/include/asm/cputype.h | 2 +
arch/arm64/include/asm/esr.h | 2 +
arch/arm64/kernel/cpu_errata.c | 15 +
arch/arm64/kernel/cpufeature.c | 13 +-
arch/arm64/kernel/traps.c | 14 +
arch/blackfin/kernel/time-ts.c | 4 +
arch/c6x/platforms/timer64.c | 2 +
arch/hexagon/kernel/time.c | 2 +
arch/m68k/coldfire/pit.c | 2 +
arch/mips/alchemy/common/time.c | 4 +-
arch/mips/jz4740/time.c | 2 +
arch/mips/kernel/cevt-bcm1480.c | 2 +
arch/mips/kernel/cevt-ds1287.c | 2 +
arch/mips/kernel/cevt-gt641xx.c | 2 +
arch/mips/kernel/cevt-sb1250.c | 2 +
arch/mips/kernel/cevt-txx9.c | 2 +
arch/mips/loongson32/common/time.c | 2 +
arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c | 2 +
arch/mips/loongson64/loongson-3/hpet.c | 2 +
arch/mips/mti-malta/malta-time.c | 31 +-
arch/mips/ralink/cevt-rt3352.c | 2 +
arch/mips/sgi-ip27/ip27-timer.c | 2 +
arch/mn10300/kernel/cevt-mn10300.c | 2 +
arch/powerpc/kernel/time.c | 2 +
arch/s390/kernel/time.c | 2 +
arch/score/kernel/time.c | 2 +
arch/sparc/kernel/time_32.c | 2 +
arch/sparc/kernel/time_64.c | 2 +
arch/tile/kernel/time.c | 2 +
arch/um/kernel/time.c | 4 +-
arch/unicore32/kernel/time.c | 2 +
arch/x86/kernel/apic/apic.c | 4 +
arch/x86/lguest/boot.c | 2 +
arch/x86/platform/uv/uv_time.c | 2 +
arch/x86/xen/time.c | 4 +
drivers/acpi/arm64/Kconfig | 3 +
drivers/acpi/arm64/Makefile | 1 +
drivers/acpi/arm64/gtdt.c | 417 ++++++++
drivers/char/mmtimer.c | 28 +-
drivers/clocksource/Kconfig | 19 +-
drivers/clocksource/Makefile | 2 +-
drivers/clocksource/arc_timer.c | 14 +-
drivers/clocksource/arm_arch_timer.c | 1108 +++++++++++++-------
drivers/clocksource/asm9260_timer.c | 2 +-
drivers/clocksource/bcm2835_timer.c | 6 +-
drivers/clocksource/bcm_kona_timer.c | 2 +-
drivers/clocksource/clksrc-probe.c | 2 +-
drivers/clocksource/dw_apb_timer.c | 4 +-
drivers/clocksource/em_sti.c | 46 +-
drivers/clocksource/h8300_timer8.c | 8 -
drivers/clocksource/meson6_timer.c | 4 +-
drivers/clocksource/metag_generic.c | 2 +
drivers/clocksource/mips-gic-timer.c | 15 +-
drivers/clocksource/nomadik-mtu.c | 8 +-
drivers/clocksource/numachip.c | 2 +
drivers/clocksource/pxa_timer.c | 6 +-
drivers/clocksource/rockchip_timer.c | 218 ++--
drivers/clocksource/samsung_pwm_timer.c | 6 +-
drivers/clocksource/sh_cmt.c | 47 +-
drivers/clocksource/sh_tmu.c | 26 +-
drivers/clocksource/sun4i_timer.c | 10 +-
drivers/clocksource/tegra20_timer.c | 2 +-
drivers/clocksource/time-armada-370-xp.c | 16 +-
drivers/clocksource/time-efm32.c | 2 +-
drivers/clocksource/time-orion.c | 34 +-
drivers/clocksource/timer-atlas7.c | 2 +
drivers/clocksource/timer-atmel-pit.c | 2 +-
drivers/clocksource/timer-digicolor.c | 6 +-
.../{timer-gemini.c => timer-fttmr010.c} | 164 +--
drivers/clocksource/timer-integrator-ap.c | 4 +-
drivers/clocksource/timer-nps.c | 6 +-
drivers/clocksource/timer-prima2.c | 10 +-
drivers/clocksource/timer-sp804.c | 4 +-
drivers/clocksource/timer-sun5i.c | 6 +-
drivers/clocksource/vf_pit_timer.c | 2 +-
drivers/ptp/ptp_clock.c | 18 +-
include/clocksource/arm_arch_timer.h | 34 +
include/linux/acpi.h | 7 +
include/linux/clockchips.h | 1 -
include/linux/clocksource.h | 2 +-
include/linux/hrtimer.h | 6 +-
include/linux/irqchip/mips-gic.h | 1 -
include/linux/posix-clock.h | 10 +-
include/linux/posix-timers.h | 20 +-
include/linux/timekeeping.h | 20 +-
kernel/compat.c | 10 +-
kernel/sysctl.c | 2 +
kernel/time/alarmtimer.c | 27 +-
kernel/time/clockevents.c | 2 +-
kernel/time/hrtimer.c | 15 +-
kernel/time/posix-clock.c | 10 +-
kernel/time/posix-cpu-timers.c | 75 +-
kernel/time/posix-stubs.c | 20 +-
kernel/time/posix-timers.c | 97 +-
kernel/time/sched_clock.c | 5 +
kernel/time/time.c | 4 +-
kernel/time/timekeeping.c | 3 +-
kernel/time/timer.c | 2 +-
kernel/time/timer_list.c | 6 +
110 files changed, 2086 insertions(+), 861 deletions(-)
delete mode 100644 Documentation/devicetree/bindings/timer/cortina,gemini-timer.txt
create mode 100644 Documentation/devicetree/bindings/timer/faraday,fttmr010.txt
create mode 100644 drivers/acpi/arm64/gtdt.c
rename drivers/clocksource/{timer-gemini.c => timer-fttmr010.c} (72%)

diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
index 2f66683500b8..10f2dddbf449 100644
--- a/Documentation/arm64/silicon-errata.txt
+++ b/Documentation/arm64/silicon-errata.txt
@@ -54,6 +54,7 @@ stable kernels.
| ARM | Cortex-A57 | #852523 | N/A |
| ARM | Cortex-A57 | #834220 | ARM64_ERRATUM_834220 |
| ARM | Cortex-A72 | #853709 | N/A |
+| ARM | Cortex-A73 | #858921 | ARM64_ERRATUM_858921 |
| ARM | MMU-500 | #841119,#826419 | N/A |
| | | | |
| Cavium | ThunderX ITS | #22375, #24313 | CAVIUM_ERRATUM_22375 |
diff --git a/Documentation/devicetree/bindings/timer/cortina,gemini-timer.txt b/Documentation/devicetree/bindings/timer/cortina,gemini-timer.txt
deleted file mode 100644
index 16ea1d3b2e9e..000000000000
--- a/Documentation/devicetree/bindings/timer/cortina,gemini-timer.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-Cortina Systems Gemini timer
-
-This timer is embedded in the Cortina Systems Gemini SoCs.
-
-Required properties:
-
-- compatible : Must be "cortina,gemini-timer"
-- reg : Should contain registers location and length
-- interrupts : Should contain the three timer interrupts with
- flags for rising edge
-- syscon : a phandle to the global Gemini system controller
-
-Example:
-
-timer@43000000 {
- compatible = "cortina,gemini-timer";
- reg = <0x43000000 0x1000>;
- interrupts = <14 IRQ_TYPE_EDGE_RISING>, /* Timer 1 */
- <15 IRQ_TYPE_EDGE_RISING>, /* Timer 2 */
- <16 IRQ_TYPE_EDGE_RISING>; /* Timer 3 */
- syscon = <&syscon>;
-};
diff --git a/Documentation/devicetree/bindings/timer/faraday,fttmr010.txt b/Documentation/devicetree/bindings/timer/faraday,fttmr010.txt
new file mode 100644
index 000000000000..b73ca6cd07f8
--- /dev/null
+++ b/Documentation/devicetree/bindings/timer/faraday,fttmr010.txt
@@ -0,0 +1,33 @@
+Faraday Technology timer
+
+This timer is a generic IP block from Faraday Technology, embedded in the
+Cortina Systems Gemini SoCs and other designs.
+
+Required properties:
+
+- compatible : Must be one of
+ "faraday,fttmr010"
+ "cortina,gemini-timer"
+- reg : Should contain registers location and length
+- interrupts : Should contain the three timer interrupts usually with
+ flags for falling edge
+
+Optionally required properties:
+
+- clocks : a clock to provide the tick rate for "faraday,fttmr010"
+- clock-names : should be "EXTCLK" and "PCLK" for the external tick timer
+ and peripheral clock respectively, for "faraday,fttmr010"
+- syscon : a phandle to the global Gemini system controller if the compatible
+ type is "cortina,gemini-timer"
+
+Example:
+
+timer@43000000 {
+ compatible = "faraday,fttmr010";
+ reg = <0x43000000 0x1000>;
+ interrupts = <14 IRQ_TYPE_EDGE_FALLING>, /* Timer 1 */
+ <15 IRQ_TYPE_EDGE_FALLING>, /* Timer 2 */
+ <16 IRQ_TYPE_EDGE_FALLING>; /* Timer 3 */
+ clocks = <&extclk>, <&pclk>;
+ clock-names = "EXTCLK", "PCLK";
+};
diff --git a/Documentation/devicetree/bindings/timer/rockchip,rk-timer.txt b/Documentation/devicetree/bindings/timer/rockchip,rk-timer.txt
index a41b184d5538..16a5f4577a61 100644
--- a/Documentation/devicetree/bindings/timer/rockchip,rk-timer.txt
+++ b/Documentation/devicetree/bindings/timer/rockchip,rk-timer.txt
@@ -1,9 +1,15 @@
Rockchip rk timer

Required properties:
-- compatible: shall be one of:
- "rockchip,rk3288-timer" - for rk3066, rk3036, rk3188, rk322x, rk3288, rk3368
- "rockchip,rk3399-timer" - for rk3399
+- compatible: should be:
+ "rockchip,rk3036-timer", "rockchip,rk3288-timer": for Rockchip RK3036
+ "rockchip,rk3066-timer", "rockchip,rk3288-timer": for Rockchip RK3066
+ "rockchip,rk3188-timer", "rockchip,rk3288-timer": for Rockchip RK3188
+ "rockchip,rk3228-timer", "rockchip,rk3288-timer": for Rockchip RK3228
+ "rockchip,rk3229-timer", "rockchip,rk3288-timer": for Rockchip RK3229
+ "rockchip,rk3288-timer": for Rockchip RK3288
+ "rockchip,rk3368-timer", "rockchip,rk3288-timer": for Rockchip RK3368
+ "rockchip,rk3399-timer": for Rockchip RK3399
- reg: base address of the timer register starting with TIMERS CONTROL register
- interrupts: should contain the interrupts for Timer0
- clocks : must contain an entry for each entry in clock-names
diff --git a/MAINTAINERS b/MAINTAINERS
index c776906f67a9..68b1a1492e96 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11099,6 +11099,7 @@ F: drivers/power/supply/bq27xxx_battery_i2c.c
TIMEKEEPING, CLOCKSOURCE CORE, NTP, ALARMTIMER
M: John Stultz <john.stultz@xxxxxxxxxx>
M: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
+R: Stephen Boyd <sboyd@xxxxxxxxxxxxxx>
L: linux-kernel@xxxxxxxxxxxxxxx
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core
S: Supported
diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
index 0b961093ca5c..9de47a9c6df2 100644
--- a/arch/alpha/kernel/osf_sys.c
+++ b/arch/alpha/kernel/osf_sys.c
@@ -1016,6 +1016,7 @@ SYSCALL_DEFINE2(osf_gettimeofday, struct timeval32 __user *, tv,
SYSCALL_DEFINE2(osf_settimeofday, struct timeval32 __user *, tv,
struct timezone __user *, tz)
{
+ struct timespec64 kts64;
struct timespec kts;
struct timezone ktz;

@@ -1023,13 +1024,14 @@ SYSCALL_DEFINE2(osf_settimeofday, struct timeval32 __user *, tv,
if (get_tv32((struct timeval *)&kts, tv))
return -EFAULT;
kts.tv_nsec *= 1000;
+ kts64 = timespec_to_timespec64(kts);
}
if (tz) {
if (copy_from_user(&ktz, tz, sizeof(*tz)))
return -EFAULT;
}

- return do_sys_settimeofday(tv ? &kts : NULL, tz ? &ktz : NULL);
+ return do_sys_settimeofday64(tv ? &kts64 : NULL, tz ? &ktz : NULL);
}

asmlinkage long sys_ni_posix_timers(void);
diff --git a/arch/arm/boot/dts/rk3188.dtsi b/arch/arm/boot/dts/rk3188.dtsi
index cf91254d0a43..1aff4ad22fc4 100644
--- a/arch/arm/boot/dts/rk3188.dtsi
+++ b/arch/arm/boot/dts/rk3188.dtsi
@@ -106,6 +106,22 @@
};
};

+ timer3: timer@2000e000 {
+ compatible = "rockchip,rk3188-timer", "rockchip,rk3288-timer";
+ reg = <0x2000e000 0x20>;
+ interrupts = <GIC_SPI 46 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru SCLK_TIMER3>, <&cru PCLK_TIMER3>;
+ clock-names = "timer", "pclk";
+ };
+
+ timer6: timer@200380a0 {
+ compatible = "rockchip,rk3188-timer", "rockchip,rk3288-timer";
+ reg = <0x200380a0 0x20>;
+ interrupts = <GIC_SPI 64 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru SCLK_TIMER6>, <&cru PCLK_TIMER0>;
+ clock-names = "timer", "pclk";
+ };
+
i2s0: i2s@1011a000 {
compatible = "rockchip,rk3188-i2s", "rockchip,rk3066-i2s";
reg = <0x1011a000 0x2000>;
@@ -530,6 +546,7 @@

&global_timer {
interrupts = <GIC_PPI 11 0xf04>;
+ status = "disabled";
};

&local_timer {
diff --git a/arch/arm/boot/dts/rk322x.dtsi b/arch/arm/boot/dts/rk322x.dtsi
index 9dff8221112c..641607d9ad29 100644
--- a/arch/arm/boot/dts/rk322x.dtsi
+++ b/arch/arm/boot/dts/rk322x.dtsi
@@ -325,7 +325,7 @@
};

timer: timer@110c0000 {
- compatible = "rockchip,rk3288-timer";
+ compatible = "rockchip,rk3228-timer", "rockchip,rk3288-timer";
reg = <0x110c0000 0x20>;
interrupts = <GIC_SPI 43 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&xin24m>, <&cru PCLK_TIMER>;
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3741859765cf..7e2baec6f23a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2,6 +2,7 @@ config ARM64
def_bool y
select ACPI_CCA_REQUIRED if ACPI
select ACPI_GENERIC_GSI if ACPI
+ select ACPI_GTDT if ACPI
select ACPI_REDUCED_HARDWARE_ONLY if ACPI
select ACPI_MCFG if ACPI
select ACPI_SPCR_TABLE if ACPI
diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h
index b4b34004a21e..74d08e44a651 100644
--- a/arch/arm64/include/asm/arch_timer.h
+++ b/arch/arm64/include/asm/arch_timer.h
@@ -25,6 +25,7 @@
#include <linux/bug.h>
#include <linux/init.h>
#include <linux/jump_label.h>
+#include <linux/smp.h>
#include <linux/types.h>

#include <clocksource/arm_arch_timer.h>
@@ -37,24 +38,44 @@ extern struct static_key_false arch_timer_read_ool_enabled;
#define needs_unstable_timer_counter_workaround() false
#endif

+enum arch_timer_erratum_match_type {
+ ate_match_dt,
+ ate_match_local_cap_id,
+ ate_match_acpi_oem_info,
+};
+
+struct clock_event_device;

struct arch_timer_erratum_workaround {
- const char *id; /* Indicate the Erratum ID */
+ enum arch_timer_erratum_match_type match_type;
+ const void *id;
+ const char *desc;
u32 (*read_cntp_tval_el0)(void);
u32 (*read_cntv_tval_el0)(void);
u64 (*read_cntvct_el0)(void);
+ int (*set_next_event_phys)(unsigned long, struct clock_event_device *);
+ int (*set_next_event_virt)(unsigned long, struct clock_event_device *);
};

-extern const struct arch_timer_erratum_workaround *timer_unstable_counter_workaround;
-
-#define arch_timer_reg_read_stable(reg) \
-({ \
- u64 _val; \
- if (needs_unstable_timer_counter_workaround()) \
- _val = timer_unstable_counter_workaround->read_##reg();\
- else \
- _val = read_sysreg(reg); \
- _val; \
+DECLARE_PER_CPU(const struct arch_timer_erratum_workaround *,
+ timer_unstable_counter_workaround);
+
+#define arch_timer_reg_read_stable(reg) \
+({ \
+ u64 _val; \
+ if (needs_unstable_timer_counter_workaround()) { \
+ const struct arch_timer_erratum_workaround *wa; \
+ preempt_disable(); \
+ wa = __this_cpu_read(timer_unstable_counter_workaround); \
+ if (wa && wa->read_##reg) \
+ _val = wa->read_##reg(); \
+ else \
+ _val = read_sysreg(reg); \
+ preempt_enable(); \
+ } else { \
+ _val = read_sysreg(reg); \
+ } \
+ _val; \
})

/*
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index fb78a5d3b60b..b3aab8a17868 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -37,7 +37,8 @@
#define ARM64_HAS_NO_FPSIMD 16
#define ARM64_WORKAROUND_REPEAT_TLBI 17
#define ARM64_WORKAROUND_QCOM_FALKOR_E1003 18
+#define ARM64_WORKAROUND_858921 19

-#define ARM64_NCAPS 19
+#define ARM64_NCAPS 20

#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index fc502713ab37..0984d1b3a8f2 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -80,6 +80,7 @@
#define ARM_CPU_PART_FOUNDATION 0xD00
#define ARM_CPU_PART_CORTEX_A57 0xD07
#define ARM_CPU_PART_CORTEX_A53 0xD03
+#define ARM_CPU_PART_CORTEX_A73 0xD09

#define APM_CPU_PART_POTENZA 0x000

@@ -92,6 +93,7 @@

#define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
#define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
+#define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index d14c478976d0..ad42e79a5d4d 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -175,6 +175,8 @@
#define ESR_ELx_SYS64_ISS_SYS_CTR_READ (ESR_ELx_SYS64_ISS_SYS_CTR | \
ESR_ELx_SYS64_ISS_DIR_READ)

+#define ESR_ELx_SYS64_ISS_SYS_CNTVCT (ESR_ELx_SYS64_ISS_SYS_VAL(3, 3, 2, 14, 0) | \
+ ESR_ELx_SYS64_ISS_DIR_READ)
#ifndef __ASSEMBLY__
#include <asm/types.h>

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index f6cc67e7626e..2ed2a7657711 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -53,6 +53,13 @@ static int cpu_enable_trap_ctr_access(void *__unused)
.midr_range_min = min, \
.midr_range_max = max

+#define MIDR_ALL_VERSIONS(model) \
+ .def_scope = SCOPE_LOCAL_CPU, \
+ .matches = is_affected_midr_range, \
+ .midr_model = model, \
+ .midr_range_min = 0, \
+ .midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)
+
const struct arm64_cpu_capabilities arm64_errata[] = {
#if defined(CONFIG_ARM64_ERRATUM_826319) || \
defined(CONFIG_ARM64_ERRATUM_827319) || \
@@ -151,6 +158,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
MIDR_CPU_VAR_REV(0, 0)),
},
#endif
+#ifdef CONFIG_ARM64_ERRATUM_858921
+ {
+ /* Cortex-A73 all versions */
+ .desc = "ARM erratum 858921",
+ .capability = ARM64_WORKAROUND_858921,
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+ },
+#endif
{
}
};
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index abda8e861865..6eb77ae99b79 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1090,20 +1090,29 @@ static void __init setup_feature_capabilities(void)
* Check if the current CPU has a given feature capability.
* Should be called from non-preemptible context.
*/
-bool this_cpu_has_cap(unsigned int cap)
+static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
+ unsigned int cap)
{
const struct arm64_cpu_capabilities *caps;

if (WARN_ON(preemptible()))
return false;

- for (caps = arm64_features; caps->desc; caps++)
+ for (caps = cap_array; caps->desc; caps++)
if (caps->capability == cap && caps->matches)
return caps->matches(caps, SCOPE_LOCAL_CPU);

return false;
}

+extern const struct arm64_cpu_capabilities arm64_errata[];
+
+bool this_cpu_has_cap(unsigned int cap)
+{
+ return (__this_cpu_has_cap(arm64_features, cap) ||
+ __this_cpu_has_cap(arm64_errata, cap));
+}
+
void __init setup_cpu_features(void)
{
u32 cwg;
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index e52be6aa44ee..1de444e6c669 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -505,6 +505,14 @@ static void ctr_read_handler(unsigned int esr, struct pt_regs *regs)
regs->pc += 4;
}

+static void cntvct_read_handler(unsigned int esr, struct pt_regs *regs)
+{
+ int rt = (esr & ESR_ELx_SYS64_ISS_RT_MASK) >> ESR_ELx_SYS64_ISS_RT_SHIFT;
+
+ pt_regs_write_reg(regs, rt, arch_counter_get_cntvct());
+ regs->pc += 4;
+}
+
struct sys64_hook {
unsigned int esr_mask;
unsigned int esr_val;
@@ -523,6 +531,12 @@ static struct sys64_hook sys64_hooks[] = {
.esr_val = ESR_ELx_SYS64_ISS_SYS_CTR_READ,
.handler = ctr_read_handler,
},
+ {
+ /* Trap read access to CNTVCT_EL0 */
+ .esr_mask = ESR_ELx_SYS64_ISS_SYS_OP_MASK,
+ .esr_val = ESR_ELx_SYS64_ISS_SYS_CNTVCT,
+ .handler = cntvct_read_handler,
+ },
{},
};

diff --git a/arch/blackfin/kernel/time-ts.c b/arch/blackfin/kernel/time-ts.c
index 0e9fcf841d67..01350557fbd7 100644
--- a/arch/blackfin/kernel/time-ts.c
+++ b/arch/blackfin/kernel/time-ts.c
@@ -230,7 +230,9 @@ static void __init bfin_gptmr0_clockevent_init(struct clock_event_device *evt)
clock_tick = get_sclk();
evt->mult = div_sc(clock_tick, NSEC_PER_SEC, evt->shift);
evt->max_delta_ns = clockevent_delta2ns(-1, evt);
+ evt->max_delta_ticks = (unsigned long)-1;
evt->min_delta_ns = clockevent_delta2ns(100, evt);
+ evt->min_delta_ticks = 100;

evt->cpumask = cpumask_of(0);

@@ -344,7 +346,9 @@ void bfin_coretmr_clockevent_init(void)
clock_tick = get_cclk() / TIME_SCALE;
evt->mult = div_sc(clock_tick, NSEC_PER_SEC, evt->shift);
evt->max_delta_ns = clockevent_delta2ns(-1, evt);
+ evt->max_delta_ticks = (unsigned long)-1;
evt->min_delta_ns = clockevent_delta2ns(100, evt);
+ evt->min_delta_ticks = 100;

evt->cpumask = cpumask_of(cpu);

diff --git a/arch/c6x/platforms/timer64.c b/arch/c6x/platforms/timer64.c
index c19901e5f055..0bd0452ded80 100644
--- a/arch/c6x/platforms/timer64.c
+++ b/arch/c6x/platforms/timer64.c
@@ -234,7 +234,9 @@ void __init timer64_init(void)
clockevents_calc_mult_shift(cd, c6x_core_freq / TIMER_DIVISOR, 5);

cd->max_delta_ns = clockevent_delta2ns(0x7fffffff, cd);
+ cd->max_delta_ticks = 0x7fffffff;
cd->min_delta_ns = clockevent_delta2ns(250, cd);
+ cd->min_delta_ticks = 250;

cd->cpumask = cpumask_of(smp_processor_id());

diff --git a/arch/hexagon/kernel/time.c b/arch/hexagon/kernel/time.c
index ff4e9bf995e9..29b1f57116c8 100644
--- a/arch/hexagon/kernel/time.c
+++ b/arch/hexagon/kernel/time.c
@@ -199,7 +199,9 @@ void __init time_init_deferred(void)
clockevents_calc_mult_shift(ce_dev, sleep_clk_freq, 4);

ce_dev->max_delta_ns = clockevent_delta2ns(0x7fffffff, ce_dev);
+ ce_dev->max_delta_ticks = 0x7fffffff;
ce_dev->min_delta_ns = clockevent_delta2ns(0xf, ce_dev);
+ ce_dev->min_delta_ticks = 0xf;

#ifdef CONFIG_SMP
setup_percpu_clockdev();
diff --git a/arch/m68k/coldfire/pit.c b/arch/m68k/coldfire/pit.c
index 175553d5b8ed..6c0878018b44 100644
--- a/arch/m68k/coldfire/pit.c
+++ b/arch/m68k/coldfire/pit.c
@@ -149,8 +149,10 @@ void hw_timer_init(irq_handler_t handler)
cf_pit_clockevent.mult = div_sc(FREQ, NSEC_PER_SEC, 32);
cf_pit_clockevent.max_delta_ns =
clockevent_delta2ns(0xFFFF, &cf_pit_clockevent);
+ cf_pit_clockevent.max_delta_ticks = 0xFFFF;
cf_pit_clockevent.min_delta_ns =
clockevent_delta2ns(0x3f, &cf_pit_clockevent);
+ cf_pit_clockevent.min_delta_ticks = 0x3f;
clockevents_register_device(&cf_pit_clockevent);

setup_irq(MCF_IRQ_PIT1, &pit_irq);
diff --git a/arch/mips/alchemy/common/time.c b/arch/mips/alchemy/common/time.c
index e1bec5a77c39..32d1333bb243 100644
--- a/arch/mips/alchemy/common/time.c
+++ b/arch/mips/alchemy/common/time.c
@@ -138,7 +138,9 @@ static int __init alchemy_time_init(unsigned int m2int)
cd->shift = 32;
cd->mult = div_sc(32768, NSEC_PER_SEC, cd->shift);
cd->max_delta_ns = clockevent_delta2ns(0xffffffff, cd);
- cd->min_delta_ns = clockevent_delta2ns(9, cd); /* ~0.28ms */
+ cd->max_delta_ticks = 0xffffffff;
+ cd->min_delta_ns = clockevent_delta2ns(9, cd);
+ cd->min_delta_ticks = 9; /* ~0.28ms */
clockevents_register_device(cd);
setup_irq(m2int, &au1x_rtcmatch2_irqaction);

diff --git a/arch/mips/jz4740/time.c b/arch/mips/jz4740/time.c
index bcf8f8c62737..bb1ad5119da4 100644
--- a/arch/mips/jz4740/time.c
+++ b/arch/mips/jz4740/time.c
@@ -145,7 +145,9 @@ void __init plat_time_init(void)

clockevent_set_clock(&jz4740_clockevent, clk_rate);
jz4740_clockevent.min_delta_ns = clockevent_delta2ns(100, &jz4740_clockevent);
+ jz4740_clockevent.min_delta_ticks = 100;
jz4740_clockevent.max_delta_ns = clockevent_delta2ns(0xffff, &jz4740_clockevent);
+ jz4740_clockevent.max_delta_ticks = 0xffff;
jz4740_clockevent.cpumask = cpumask_of(0);

clockevents_register_device(&jz4740_clockevent);
diff --git a/arch/mips/kernel/cevt-bcm1480.c b/arch/mips/kernel/cevt-bcm1480.c
index 940ac00e9129..8f9f2daf06a3 100644
--- a/arch/mips/kernel/cevt-bcm1480.c
+++ b/arch/mips/kernel/cevt-bcm1480.c
@@ -123,7 +123,9 @@ void sb1480_clockevent_init(void)
CLOCK_EVT_FEAT_ONESHOT;
clockevent_set_clock(cd, V_SCD_TIMER_FREQ);
cd->max_delta_ns = clockevent_delta2ns(0x7fffff, cd);
+ cd->max_delta_ticks = 0x7fffff;
cd->min_delta_ns = clockevent_delta2ns(2, cd);
+ cd->min_delta_ticks = 2;
cd->rating = 200;
cd->irq = irq;
cd->cpumask = cpumask_of(cpu);
diff --git a/arch/mips/kernel/cevt-ds1287.c b/arch/mips/kernel/cevt-ds1287.c
index 77a5ddf53f57..61ad9079fa16 100644
--- a/arch/mips/kernel/cevt-ds1287.c
+++ b/arch/mips/kernel/cevt-ds1287.c
@@ -128,7 +128,9 @@ int __init ds1287_clockevent_init(int irq)
cd->irq = irq;
clockevent_set_clock(cd, 32768);
cd->max_delta_ns = clockevent_delta2ns(0x7fffffff, cd);
+ cd->max_delta_ticks = 0x7fffffff;
cd->min_delta_ns = clockevent_delta2ns(0x300, cd);
+ cd->min_delta_ticks = 0x300;
cd->cpumask = cpumask_of(0);

clockevents_register_device(&ds1287_clockevent);
diff --git a/arch/mips/kernel/cevt-gt641xx.c b/arch/mips/kernel/cevt-gt641xx.c
index 66040051151d..fd90c82dc17d 100644
--- a/arch/mips/kernel/cevt-gt641xx.c
+++ b/arch/mips/kernel/cevt-gt641xx.c
@@ -152,7 +152,9 @@ static int __init gt641xx_timer0_clockevent_init(void)
cd->rating = 200 + gt641xx_base_clock / 10000000;
clockevent_set_clock(cd, gt641xx_base_clock);
cd->max_delta_ns = clockevent_delta2ns(0x7fffffff, cd);
+ cd->max_delta_ticks = 0x7fffffff;
cd->min_delta_ns = clockevent_delta2ns(0x300, cd);
+ cd->min_delta_ticks = 0x300;
cd->cpumask = cpumask_of(0);

clockevents_register_device(&gt641xx_timer0_clockevent);
diff --git a/arch/mips/kernel/cevt-sb1250.c b/arch/mips/kernel/cevt-sb1250.c
index 3d860efd63b9..9d1edb5938b8 100644
--- a/arch/mips/kernel/cevt-sb1250.c
+++ b/arch/mips/kernel/cevt-sb1250.c
@@ -123,7 +123,9 @@ void sb1250_clockevent_init(void)
CLOCK_EVT_FEAT_ONESHOT;
clockevent_set_clock(cd, V_SCD_TIMER_FREQ);
cd->max_delta_ns = clockevent_delta2ns(0x7fffff, cd);
+ cd->max_delta_ticks = 0x7fffff;
cd->min_delta_ns = clockevent_delta2ns(2, cd);
+ cd->min_delta_ticks = 2;
cd->rating = 200;
cd->irq = irq;
cd->cpumask = cpumask_of(cpu);
diff --git a/arch/mips/kernel/cevt-txx9.c b/arch/mips/kernel/cevt-txx9.c
index aaca60d6ffc3..7b17c8f5009d 100644
--- a/arch/mips/kernel/cevt-txx9.c
+++ b/arch/mips/kernel/cevt-txx9.c
@@ -196,7 +196,9 @@ void __init txx9_clockevent_init(unsigned long baseaddr, int irq,
clockevent_set_clock(cd, TIMER_CLK(imbusclk));
cd->max_delta_ns =
clockevent_delta2ns(0xffffffff >> (32 - TXX9_TIMER_BITS), cd);
+ cd->max_delta_ticks = 0xffffffff >> (32 - TXX9_TIMER_BITS);
cd->min_delta_ns = clockevent_delta2ns(0xf, cd);
+ cd->min_delta_ticks = 0xf;
cd->irq = irq;
cd->cpumask = cpumask_of(0),
clockevents_register_device(cd);
diff --git a/arch/mips/loongson32/common/time.c b/arch/mips/loongson32/common/time.c
index e6f972d35252..1c4332a26cf1 100644
--- a/arch/mips/loongson32/common/time.c
+++ b/arch/mips/loongson32/common/time.c
@@ -199,7 +199,9 @@ static void __init ls1x_time_init(void)

clockevent_set_clock(cd, mips_hpt_frequency);
cd->max_delta_ns = clockevent_delta2ns(0xffffff, cd);
+ cd->max_delta_ticks = 0xffffff;
cd->min_delta_ns = clockevent_delta2ns(0x000300, cd);
+ cd->min_delta_ticks = 0x000300;
cd->cpumask = cpumask_of(smp_processor_id());
clockevents_register_device(cd);

diff --git a/arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c b/arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c
index b817d6d3a060..a6adcc4f8960 100644
--- a/arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c
+++ b/arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c
@@ -123,7 +123,9 @@ void __init setup_mfgpt0_timer(void)
cd->cpumask = cpumask_of(cpu);
clockevent_set_clock(cd, MFGPT_TICK_RATE);
cd->max_delta_ns = clockevent_delta2ns(0xffff, cd);
+ cd->max_delta_ticks = 0xffff;
cd->min_delta_ns = clockevent_delta2ns(0xf, cd);
+ cd->min_delta_ticks = 0xf;

/* Enable MFGPT0 Comparator 2 Output to the Interrupt Mapper */
_wrmsr(DIVIL_MSR_REG(MFGPT_IRQ), 0, 0x100);
diff --git a/arch/mips/loongson64/loongson-3/hpet.c b/arch/mips/loongson64/loongson-3/hpet.c
index 24afe364637b..4df9d4b7356a 100644
--- a/arch/mips/loongson64/loongson-3/hpet.c
+++ b/arch/mips/loongson64/loongson-3/hpet.c
@@ -241,7 +241,9 @@ void __init setup_hpet_timer(void)
cd->cpumask = cpumask_of(cpu);
clockevent_set_clock(cd, HPET_FREQ);
cd->max_delta_ns = clockevent_delta2ns(0x7fffffff, cd);
+ cd->max_delta_ticks = 0x7fffffff;
cd->min_delta_ns = clockevent_delta2ns(HPET_MIN_PROG_DELTA, cd);
+ cd->min_delta_ticks = HPET_MIN_PROG_DELTA;

clockevents_register_device(cd);
setup_irq(HPET_T0_IRQ, &hpet_irq);
diff --git a/arch/mips/mti-malta/malta-time.c b/arch/mips/mti-malta/malta-time.c
index 1829a9031eec..289edcfadd7c 100644
--- a/arch/mips/mti-malta/malta-time.c
+++ b/arch/mips/mti-malta/malta-time.c
@@ -21,6 +21,7 @@
#include <linux/i8253.h>
#include <linux/init.h>
#include <linux/kernel_stat.h>
+#include <linux/libfdt.h>
#include <linux/math64.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
@@ -207,6 +208,33 @@ static void __init init_rtc(void)
CMOS_WRITE(ctrl & ~RTC_SET, RTC_CONTROL);
}

+#ifdef CONFIG_CLKSRC_MIPS_GIC
+static u32 gic_frequency_dt;
+
+static struct property gic_frequency_prop = {
+ .name = "clock-frequency",
+ .length = sizeof(u32),
+ .value = &gic_frequency_dt,
+};
+
+static void update_gic_frequency_dt(void)
+{
+ struct device_node *node;
+
+ gic_frequency_dt = cpu_to_be32(gic_frequency);
+
+ node = of_find_compatible_node(NULL, NULL, "mti,gic-timer");
+ if (!node) {
+ pr_err("mti,gic-timer device node not found\n");
+ return;
+ }
+
+ if (of_update_property(node, &gic_frequency_prop) < 0)
+ pr_err("error updating gic frequency property\n");
+}
+
+#endif
+
void __init plat_time_init(void)
{
unsigned int prid = read_c0_prid() & (PRID_COMP_MASK | PRID_IMP_MASK);
@@ -236,7 +264,8 @@ void __init plat_time_init(void)
printk("GIC frequency %d.%02d MHz\n", freq/1000000,
(freq%1000000)*100/1000000);
#ifdef CONFIG_CLKSRC_MIPS_GIC
- gic_clocksource_init(gic_frequency);
+ update_gic_frequency_dt();
+ clocksource_probe();
#endif
}
#endif
diff --git a/arch/mips/ralink/cevt-rt3352.c b/arch/mips/ralink/cevt-rt3352.c
index f24eee04e16a..b8a1376165b0 100644
--- a/arch/mips/ralink/cevt-rt3352.c
+++ b/arch/mips/ralink/cevt-rt3352.c
@@ -129,7 +129,9 @@ static int __init ralink_systick_init(struct device_node *np)
systick.dev.name = np->name;
clockevents_calc_mult_shift(&systick.dev, SYSTICK_FREQ, 60);
systick.dev.max_delta_ns = clockevent_delta2ns(0x7fff, &systick.dev);
+ systick.dev.max_delta_ticks = 0x7fff;
systick.dev.min_delta_ns = clockevent_delta2ns(0x3, &systick.dev);
+ systick.dev.min_delta_ticks = 0x3;
systick.dev.irq = irq_of_parse_and_map(np, 0);
if (!systick.dev.irq) {
pr_err("%s: request_irq failed", np->name);
diff --git a/arch/mips/sgi-ip27/ip27-timer.c b/arch/mips/sgi-ip27/ip27-timer.c
index 695c51bdd7dc..a53f0c8c901e 100644
--- a/arch/mips/sgi-ip27/ip27-timer.c
+++ b/arch/mips/sgi-ip27/ip27-timer.c
@@ -113,7 +113,9 @@ void hub_rt_clock_event_init(void)
cd->features = CLOCK_EVT_FEAT_ONESHOT;
clockevent_set_clock(cd, CYCLES_PER_SEC);
cd->max_delta_ns = clockevent_delta2ns(0xfffffffffffff, cd);
+ cd->max_delta_ticks = 0xfffffffffffff;
cd->min_delta_ns = clockevent_delta2ns(0x300, cd);
+ cd->min_delta_ticks = 0x300;
cd->rating = 200;
cd->irq = irq;
cd->cpumask = cpumask_of(cpu);
diff --git a/arch/mn10300/kernel/cevt-mn10300.c b/arch/mn10300/kernel/cevt-mn10300.c
index d9b34dd44f04..2b21bbc9efa4 100644
--- a/arch/mn10300/kernel/cevt-mn10300.c
+++ b/arch/mn10300/kernel/cevt-mn10300.c
@@ -98,7 +98,9 @@ int __init init_clockevents(void)

/* Calculate the min / max delta */
cd->max_delta_ns = clockevent_delta2ns(TMJCBR_MAX, cd);
+ cd->max_delta_ticks = TMJCBR_MAX;
cd->min_delta_ns = clockevent_delta2ns(100, cd);
+ cd->min_delta_ticks = 100;

cd->rating = 200;
cd->cpumask = cpumask_of(smp_processor_id());
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 07b90725855e..2b33cfaac7b8 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -995,8 +995,10 @@ static void __init init_decrementer_clockevent(void)

decrementer_clockevent.max_delta_ns =
clockevent_delta2ns(decrementer_max, &decrementer_clockevent);
+ decrementer_clockevent.max_delta_ticks = decrementer_max;
decrementer_clockevent.min_delta_ns =
clockevent_delta2ns(2, &decrementer_clockevent);
+ decrementer_clockevent.min_delta_ticks = 2;

register_decrementer_clockevent(cpu);
}
diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
index c31da46bc037..c3a52f9a69a0 100644
--- a/arch/s390/kernel/time.c
+++ b/arch/s390/kernel/time.c
@@ -158,7 +158,9 @@ void init_cpu_timer(void)
cd->mult = 16777;
cd->shift = 12;
cd->min_delta_ns = 1;
+ cd->min_delta_ticks = 1;
cd->max_delta_ns = LONG_MAX;
+ cd->max_delta_ticks = ULONG_MAX;
cd->rating = 400;
cd->cpumask = cpumask_of(cpu);
cd->set_next_event = s390_next_event;
diff --git a/arch/score/kernel/time.c b/arch/score/kernel/time.c
index 679b8d7b0350..29aafc741f69 100644
--- a/arch/score/kernel/time.c
+++ b/arch/score/kernel/time.c
@@ -81,8 +81,10 @@ void __init time_init(void)
score_clockevent.shift);
score_clockevent.max_delta_ns = clockevent_delta2ns((u32)~0,
&score_clockevent);
+ score_clockevent.max_delta_ticks = (u32)~0;
score_clockevent.min_delta_ns = clockevent_delta2ns(50,
&score_clockevent) + 1;
+ score_clockevent.min_delta_ticks = 50;
score_clockevent.cpumask = cpumask_of(0);
clockevents_register_device(&score_clockevent);
}
diff --git a/arch/sparc/kernel/time_32.c b/arch/sparc/kernel/time_32.c
index 244062bdaa56..9f575dfc2e41 100644
--- a/arch/sparc/kernel/time_32.c
+++ b/arch/sparc/kernel/time_32.c
@@ -228,7 +228,9 @@ void register_percpu_ce(int cpu)
ce->mult = div_sc(sparc_config.clock_rate, NSEC_PER_SEC,
ce->shift);
ce->max_delta_ns = clockevent_delta2ns(sparc_config.clock_rate, ce);
+ ce->max_delta_ticks = (unsigned long)sparc_config.clock_rate;
ce->min_delta_ns = clockevent_delta2ns(100, ce);
+ ce->min_delta_ticks = 100;

clockevents_register_device(ce);
}
diff --git a/arch/sparc/kernel/time_64.c b/arch/sparc/kernel/time_64.c
index 12a6d3555cb8..98d05de8da66 100644
--- a/arch/sparc/kernel/time_64.c
+++ b/arch/sparc/kernel/time_64.c
@@ -796,8 +796,10 @@ void __init time_init(void)

sparc64_clockevent.max_delta_ns =
clockevent_delta2ns(0x7fffffffffffffffUL, &sparc64_clockevent);
+ sparc64_clockevent.max_delta_ticks = 0x7fffffffffffffffUL;
sparc64_clockevent.min_delta_ns =
clockevent_delta2ns(0xF, &sparc64_clockevent);
+ sparc64_clockevent.min_delta_ticks = 0xF;

printk("clockevent: mult[%x] shift[%d]\n",
sparc64_clockevent.mult, sparc64_clockevent.shift);
diff --git a/arch/tile/kernel/time.c b/arch/tile/kernel/time.c
index 5bd4e88c7c60..6643ffbc0615 100644
--- a/arch/tile/kernel/time.c
+++ b/arch/tile/kernel/time.c
@@ -155,6 +155,8 @@ static DEFINE_PER_CPU(struct clock_event_device, tile_timer) = {
.name = "tile timer",
.features = CLOCK_EVT_FEAT_ONESHOT,
.min_delta_ns = 1000,
+ .min_delta_ticks = 1,
+ .max_delta_ticks = MAX_TICK,
.rating = 100,
.irq = -1,
.set_next_event = tile_timer_set_next_event,
diff --git a/arch/um/kernel/time.c b/arch/um/kernel/time.c
index ba87a27d6715..0b034ebbda2a 100644
--- a/arch/um/kernel/time.c
+++ b/arch/um/kernel/time.c
@@ -65,7 +65,9 @@ static struct clock_event_device timer_clockevent = {
.set_next_event = itimer_next_event,
.shift = 0,
.max_delta_ns = 0xffffffff,
- .min_delta_ns = TIMER_MIN_DELTA, //microsecond resolution should be enough for anyone, same as 640K RAM
+ .max_delta_ticks = 0xffffffff,
+ .min_delta_ns = TIMER_MIN_DELTA,
+ .min_delta_ticks = TIMER_MIN_DELTA, // microsecond resolution should be enough for anyone, same as 640K RAM
.irq = 0,
.mult = 1,
};
diff --git a/arch/unicore32/kernel/time.c b/arch/unicore32/kernel/time.c
index fceaa673f861..c6b3fa3ee0b6 100644
--- a/arch/unicore32/kernel/time.c
+++ b/arch/unicore32/kernel/time.c
@@ -91,8 +91,10 @@ void __init time_init(void)

ckevt_puv3_osmr0.max_delta_ns =
clockevent_delta2ns(0x7fffffff, &ckevt_puv3_osmr0);
+ ckevt_puv3_osmr0.max_delta_ticks = 0x7fffffff;
ckevt_puv3_osmr0.min_delta_ns =
clockevent_delta2ns(MIN_OSCR_DELTA * 2, &ckevt_puv3_osmr0) + 1;
+ ckevt_puv3_osmr0.min_delta_ticks = MIN_OSCR_DELTA * 2;
ckevt_puv3_osmr0.cpumask = cpumask_of(0);

setup_irq(IRQ_TIMER0, &puv3_timer_irq);
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index 8ccb7ef512e0..875091d4609d 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -731,8 +731,10 @@ static int __init calibrate_APIC_clock(void)
TICK_NSEC, lapic_clockevent.shift);
lapic_clockevent.max_delta_ns =
clockevent_delta2ns(0x7FFFFF, &lapic_clockevent);
+ lapic_clockevent.max_delta_ticks = 0x7FFFFF;
lapic_clockevent.min_delta_ns =
clockevent_delta2ns(0xF, &lapic_clockevent);
+ lapic_clockevent.min_delta_ticks = 0xF;
lapic_clockevent.features &= ~CLOCK_EVT_FEAT_DUMMY;
return 0;
}
@@ -778,8 +780,10 @@ static int __init calibrate_APIC_clock(void)
lapic_clockevent.shift);
lapic_clockevent.max_delta_ns =
clockevent_delta2ns(0x7FFFFFFF, &lapic_clockevent);
+ lapic_clockevent.max_delta_ticks = 0x7FFFFFFF;
lapic_clockevent.min_delta_ns =
clockevent_delta2ns(0xF, &lapic_clockevent);
+ lapic_clockevent.min_delta_ticks = 0xF;

lapic_timer_frequency = (delta * APIC_DIVISOR) / LAPIC_CAL_LOOPS;

diff --git a/arch/x86/lguest/boot.c b/arch/x86/lguest/boot.c
index d3289d7e78fa..3e4bf887a246 100644
--- a/arch/x86/lguest/boot.c
+++ b/arch/x86/lguest/boot.c
@@ -994,7 +994,9 @@ static struct clock_event_device lguest_clockevent = {
.mult = 1,
.shift = 0,
.min_delta_ns = LG_CLOCK_MIN_DELTA,
+ .min_delta_ticks = LG_CLOCK_MIN_DELTA,
.max_delta_ns = LG_CLOCK_MAX_DELTA,
+ .max_delta_ticks = LG_CLOCK_MAX_DELTA,
};

/*
diff --git a/arch/x86/platform/uv/uv_time.c b/arch/x86/platform/uv/uv_time.c
index 2ee7632d4916..b082d71b08ee 100644
--- a/arch/x86/platform/uv/uv_time.c
+++ b/arch/x86/platform/uv/uv_time.c
@@ -390,9 +390,11 @@ static __init int uv_rtc_setup_clock(void)

clock_event_device_uv.min_delta_ns = NSEC_PER_SEC /
sn_rtc_cycles_per_second;
+ clock_event_device_uv.min_delta_ticks = 1;

clock_event_device_uv.max_delta_ns = clocksource_uv.mask *
(NSEC_PER_SEC / sn_rtc_cycles_per_second);
+ clock_event_device_uv.max_delta_ticks = clocksource_uv.mask;

rc = schedule_on_each_cpu(uv_rtc_register_clockevents);
if (rc) {
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 1e69956d7852..7a3089285c59 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -209,7 +209,9 @@ static const struct clock_event_device xen_timerop_clockevent = {
.features = CLOCK_EVT_FEAT_ONESHOT,

.max_delta_ns = 0xffffffff,
+ .max_delta_ticks = 0xffffffff,
.min_delta_ns = TIMER_SLOP,
+ .min_delta_ticks = TIMER_SLOP,

.mult = 1,
.shift = 0,
@@ -268,7 +270,9 @@ static const struct clock_event_device xen_vcpuop_clockevent = {
.features = CLOCK_EVT_FEAT_ONESHOT,

.max_delta_ns = 0xffffffff,
+ .max_delta_ticks = 0xffffffff,
.min_delta_ns = TIMER_SLOP,
+ .min_delta_ticks = TIMER_SLOP,

.mult = 1,
.shift = 0,
diff --git a/drivers/acpi/arm64/Kconfig b/drivers/acpi/arm64/Kconfig
index 4616da4c15be..5a6f80fce0d6 100644
--- a/drivers/acpi/arm64/Kconfig
+++ b/drivers/acpi/arm64/Kconfig
@@ -4,3 +4,6 @@

config ACPI_IORT
bool
+
+config ACPI_GTDT
+ bool
diff --git a/drivers/acpi/arm64/Makefile b/drivers/acpi/arm64/Makefile
index 72331f2ce0e9..1017def2ea12 100644
--- a/drivers/acpi/arm64/Makefile
+++ b/drivers/acpi/arm64/Makefile
@@ -1 +1,2 @@
obj-$(CONFIG_ACPI_IORT) += iort.o
+obj-$(CONFIG_ACPI_GTDT) += gtdt.o
diff --git a/drivers/acpi/arm64/gtdt.c b/drivers/acpi/arm64/gtdt.c
new file mode 100644
index 000000000000..597a737d538f
--- /dev/null
+++ b/drivers/acpi/arm64/gtdt.c
@@ -0,0 +1,417 @@
+/*
+ * ARM Specific GTDT table Support
+ *
+ * Copyright (C) 2016, Linaro Ltd.
+ * Author: Daniel Lezcano <daniel.lezcano@xxxxxxxxxx>
+ * Fu Wei <fu.wei@xxxxxxxxxx>
+ * Hanjun Guo <hanjun.guo@xxxxxxxxxx>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/acpi.h>
+#include <linux/init.h>
+#include <linux/irqdomain.h>
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+
+#include <clocksource/arm_arch_timer.h>
+
+#undef pr_fmt
+#define pr_fmt(fmt) "ACPI GTDT: " fmt
+
+/**
+ * struct acpi_gtdt_descriptor - Store the key info of GTDT for all functions
+ * @gtdt: The pointer to the struct acpi_table_gtdt of GTDT table.
+ * @gtdt_end: The pointer to the end of GTDT table.
+ * @platform_timer: The pointer to the start of Platform Timer Structure
+ *
+ * The struct store the key info of GTDT table, it should be initialized by
+ * acpi_gtdt_init.
+ */
+struct acpi_gtdt_descriptor {
+ struct acpi_table_gtdt *gtdt;
+ void *gtdt_end;
+ void *platform_timer;
+};
+
+static struct acpi_gtdt_descriptor acpi_gtdt_desc __initdata;
+
+static inline void *next_platform_timer(void *platform_timer)
+{
+ struct acpi_gtdt_header *gh = platform_timer;
+
+ platform_timer += gh->length;
+ if (platform_timer < acpi_gtdt_desc.gtdt_end)
+ return platform_timer;
+
+ return NULL;
+}
+
+#define for_each_platform_timer(_g) \
+ for (_g = acpi_gtdt_desc.platform_timer; _g; \
+ _g = next_platform_timer(_g))
+
+static inline bool is_timer_block(void *platform_timer)
+{
+ struct acpi_gtdt_header *gh = platform_timer;
+
+ return gh->type == ACPI_GTDT_TYPE_TIMER_BLOCK;
+}
+
+static inline bool is_non_secure_watchdog(void *platform_timer)
+{
+ struct acpi_gtdt_header *gh = platform_timer;
+ struct acpi_gtdt_watchdog *wd = platform_timer;
+
+ if (gh->type != ACPI_GTDT_TYPE_WATCHDOG)
+ return false;
+
+ return !(wd->timer_flags & ACPI_GTDT_WATCHDOG_SECURE);
+}
+
+static int __init map_gt_gsi(u32 interrupt, u32 flags)
+{
+ int trigger, polarity;
+
+ trigger = (flags & ACPI_GTDT_INTERRUPT_MODE) ? ACPI_EDGE_SENSITIVE
+ : ACPI_LEVEL_SENSITIVE;
+
+ polarity = (flags & ACPI_GTDT_INTERRUPT_POLARITY) ? ACPI_ACTIVE_LOW
+ : ACPI_ACTIVE_HIGH;
+
+ return acpi_register_gsi(NULL, interrupt, trigger, polarity);
+}
+
+/**
+ * acpi_gtdt_map_ppi() - Map the PPIs of per-cpu arch_timer.
+ * @type: the type of PPI.
+ *
+ * Note: Secure state is not managed by the kernel on ARM64 systems.
+ * So we only handle the non-secure timer PPIs,
+ * ARCH_TIMER_PHYS_SECURE_PPI is treated as invalid type.
+ *
+ * Return: the mapped PPI value, 0 if error.
+ */
+int __init acpi_gtdt_map_ppi(int type)
+{
+ struct acpi_table_gtdt *gtdt = acpi_gtdt_desc.gtdt;
+
+ switch (type) {
+ case ARCH_TIMER_PHYS_NONSECURE_PPI:
+ return map_gt_gsi(gtdt->non_secure_el1_interrupt,
+ gtdt->non_secure_el1_flags);
+ case ARCH_TIMER_VIRT_PPI:
+ return map_gt_gsi(gtdt->virtual_timer_interrupt,
+ gtdt->virtual_timer_flags);
+
+ case ARCH_TIMER_HYP_PPI:
+ return map_gt_gsi(gtdt->non_secure_el2_interrupt,
+ gtdt->non_secure_el2_flags);
+ default:
+ pr_err("Failed to map timer interrupt: invalid type.\n");
+ }
+
+ return 0;
+}
+
+/**
+ * acpi_gtdt_c3stop() - Got c3stop info from GTDT according to the type of PPI.
+ * @type: the type of PPI.
+ *
+ * Return: true if the timer HW state is lost when a CPU enters an idle state,
+ * false otherwise
+ */
+bool __init acpi_gtdt_c3stop(int type)
+{
+ struct acpi_table_gtdt *gtdt = acpi_gtdt_desc.gtdt;
+
+ switch (type) {
+ case ARCH_TIMER_PHYS_NONSECURE_PPI:
+ return !(gtdt->non_secure_el1_flags & ACPI_GTDT_ALWAYS_ON);
+
+ case ARCH_TIMER_VIRT_PPI:
+ return !(gtdt->virtual_timer_flags & ACPI_GTDT_ALWAYS_ON);
+
+ case ARCH_TIMER_HYP_PPI:
+ return !(gtdt->non_secure_el2_flags & ACPI_GTDT_ALWAYS_ON);
+
+ default:
+ pr_err("Failed to get c3stop info: invalid type.\n");
+ }
+
+ return false;
+}
+
+/**
+ * acpi_gtdt_init() - Get the info of GTDT table to prepare for further init.
+ * @table: The pointer to GTDT table.
+ * @platform_timer_count: It points to a integer variable which is used
+ * for storing the number of platform timers.
+ * This pointer could be NULL, if the caller
+ * doesn't need this info.
+ *
+ * Return: 0 if success, -EINVAL if error.
+ */
+int __init acpi_gtdt_init(struct acpi_table_header *table,
+ int *platform_timer_count)
+{
+ void *platform_timer;
+ struct acpi_table_gtdt *gtdt;
+
+ gtdt = container_of(table, struct acpi_table_gtdt, header);
+ acpi_gtdt_desc.gtdt = gtdt;
+ acpi_gtdt_desc.gtdt_end = (void *)table + table->length;
+ acpi_gtdt_desc.platform_timer = NULL;
+ if (platform_timer_count)
+ *platform_timer_count = 0;
+
+ if (table->revision < 2) {
+ pr_warn("Revision:%d doesn't support Platform Timers.\n",
+ table->revision);
+ return 0;
+ }
+
+ if (!gtdt->platform_timer_count) {
+ pr_debug("No Platform Timer.\n");
+ return 0;
+ }
+
+ platform_timer = (void *)gtdt + gtdt->platform_timer_offset;
+ if (platform_timer < (void *)table + sizeof(struct acpi_table_gtdt)) {
+ pr_err(FW_BUG "invalid timer data.\n");
+ return -EINVAL;
+ }
+ acpi_gtdt_desc.platform_timer = platform_timer;
+ if (platform_timer_count)
+ *platform_timer_count = gtdt->platform_timer_count;
+
+ return 0;
+}
+
+static int __init gtdt_parse_timer_block(struct acpi_gtdt_timer_block *block,
+ struct arch_timer_mem *timer_mem)
+{
+ int i;
+ struct arch_timer_mem_frame *frame;
+ struct acpi_gtdt_timer_entry *gtdt_frame;
+
+ if (!block->timer_count) {
+ pr_err(FW_BUG "GT block present, but frame count is zero.");
+ return -ENODEV;
+ }
+
+ if (block->timer_count > ARCH_TIMER_MEM_MAX_FRAMES) {
+ pr_err(FW_BUG "GT block lists %d frames, ACPI spec only allows 8\n",
+ block->timer_count);
+ return -EINVAL;
+ }
+
+ timer_mem->cntctlbase = (phys_addr_t)block->block_address;
+ /*
+ * The CNTCTLBase frame is 4KB (register offsets 0x000 - 0xFFC).
+ * See ARM DDI 0487A.k_iss10775, page I1-5129, Table I1-3
+ * "CNTCTLBase memory map".
+ */
+ timer_mem->size = SZ_4K;
+
+ gtdt_frame = (void *)block + block->timer_offset;
+ if (gtdt_frame + block->timer_count != (void *)block + block->header.length)
+ return -EINVAL;
+
+ /*
+ * Get the GT timer Frame data for every GT Block Timer
+ */
+ for (i = 0; i < block->timer_count; i++, gtdt_frame++) {
+ if (gtdt_frame->common_flags & ACPI_GTDT_GT_IS_SECURE_TIMER)
+ continue;
+ if (gtdt_frame->frame_number >= ARCH_TIMER_MEM_MAX_FRAMES ||
+ !gtdt_frame->base_address || !gtdt_frame->timer_interrupt)
+ goto error;
+
+ frame = &timer_mem->frame[gtdt_frame->frame_number];
+
+ /* duplicate frame */
+ if (frame->valid)
+ goto error;
+
+ frame->phys_irq = map_gt_gsi(gtdt_frame->timer_interrupt,
+ gtdt_frame->timer_flags);
+ if (frame->phys_irq <= 0) {
+ pr_warn("failed to map physical timer irq in frame %d.\n",
+ gtdt_frame->frame_number);
+ goto error;
+ }
+
+ if (gtdt_frame->virtual_timer_interrupt) {
+ frame->virt_irq =
+ map_gt_gsi(gtdt_frame->virtual_timer_interrupt,
+ gtdt_frame->virtual_timer_flags);
+ if (frame->virt_irq <= 0) {
+ pr_warn("failed to map virtual timer irq in frame %d.\n",
+ gtdt_frame->frame_number);
+ goto error;
+ }
+ } else {
+ pr_debug("virtual timer in frame %d not implemented.\n",
+ gtdt_frame->frame_number);
+ }
+
+ frame->cntbase = gtdt_frame->base_address;
+ /*
+ * The CNTBaseN frame is 4KB (register offsets 0x000 - 0xFFC).
+ * See ARM DDI 0487A.k_iss10775, page I1-5130, Table I1-4
+ * "CNTBaseN memory map".
+ */
+ frame->size = SZ_4K;
+ frame->valid = true;
+ }
+
+ return 0;
+
+error:
+ do {
+ if (gtdt_frame->common_flags & ACPI_GTDT_GT_IS_SECURE_TIMER ||
+ gtdt_frame->frame_number >= ARCH_TIMER_MEM_MAX_FRAMES)
+ continue;
+
+ frame = &timer_mem->frame[gtdt_frame->frame_number];
+
+ if (frame->phys_irq > 0)
+ acpi_unregister_gsi(gtdt_frame->timer_interrupt);
+ frame->phys_irq = 0;
+
+ if (frame->virt_irq > 0)
+ acpi_unregister_gsi(gtdt_frame->virtual_timer_interrupt);
+ frame->virt_irq = 0;
+ } while (i-- >= 0 && gtdt_frame--);
+
+ return -EINVAL;
+}
+
+/**
+ * acpi_arch_timer_mem_init() - Get the info of all GT blocks in GTDT table.
+ * @timer_mem: The pointer to the array of struct arch_timer_mem for returning
+ * the result of parsing. The element number of this array should
+ * be platform_timer_count(the total number of platform timers).
+ * @timer_count: It points to a integer variable which is used for storing the
+ * number of GT blocks we have parsed.
+ *
+ * Return: 0 if success, -EINVAL/-ENODEV if error.
+ */
+int __init acpi_arch_timer_mem_init(struct arch_timer_mem *timer_mem,
+ int *timer_count)
+{
+ int ret;
+ void *platform_timer;
+
+ *timer_count = 0;
+ for_each_platform_timer(platform_timer) {
+ if (is_timer_block(platform_timer)) {
+ ret = gtdt_parse_timer_block(platform_timer, timer_mem);
+ if (ret)
+ return ret;
+ timer_mem++;
+ (*timer_count)++;
+ }
+ }
+
+ if (*timer_count)
+ pr_info("found %d memory-mapped timer block(s).\n",
+ *timer_count);
+
+ return 0;
+}
+
+/*
+ * Initialize a SBSA generic Watchdog platform device info from GTDT
+ */
+static int __init gtdt_import_sbsa_gwdt(struct acpi_gtdt_watchdog *wd,
+ int index)
+{
+ struct platform_device *pdev;
+ int irq = map_gt_gsi(wd->timer_interrupt, wd->timer_flags);
+
+ /*
+ * According to SBSA specification the size of refresh and control
+ * frames of SBSA Generic Watchdog is SZ_4K(Offset 0x000 – 0xFFF).
+ */
+ struct resource res[] = {
+ DEFINE_RES_MEM(wd->control_frame_address, SZ_4K),
+ DEFINE_RES_MEM(wd->refresh_frame_address, SZ_4K),
+ DEFINE_RES_IRQ(irq),
+ };
+ int nr_res = ARRAY_SIZE(res);
+
+ pr_debug("found a Watchdog (0x%llx/0x%llx gsi:%u flags:0x%x).\n",
+ wd->refresh_frame_address, wd->control_frame_address,
+ wd->timer_interrupt, wd->timer_flags);
+
+ if (!(wd->refresh_frame_address && wd->control_frame_address)) {
+ pr_err(FW_BUG "failed to get the Watchdog base address.\n");
+ acpi_unregister_gsi(wd->timer_interrupt);
+ return -EINVAL;
+ }
+
+ if (irq <= 0) {
+ pr_warn("failed to map the Watchdog interrupt.\n");
+ nr_res--;
+ }
+
+ /*
+ * Add a platform device named "sbsa-gwdt" to match the platform driver.
+ * "sbsa-gwdt": SBSA(Server Base System Architecture) Generic Watchdog
+ * The platform driver can get device info below by matching this name.
+ */
+ pdev = platform_device_register_simple("sbsa-gwdt", index, res, nr_res);
+ if (IS_ERR(pdev)) {
+ acpi_unregister_gsi(wd->timer_interrupt);
+ return PTR_ERR(pdev);
+ }
+
+ return 0;
+}
+
+static int __init gtdt_sbsa_gwdt_init(void)
+{
+ void *platform_timer;
+ struct acpi_table_header *table;
+ int ret, timer_count, gwdt_count = 0;
+
+ if (acpi_disabled)
+ return 0;
+
+ if (ACPI_FAILURE(acpi_get_table(ACPI_SIG_GTDT, 0, &table)))
+ return -EINVAL;
+
+ /*
+ * Note: Even though the global variable acpi_gtdt_desc has been
+ * initialized by acpi_gtdt_init() while initializing the arch timers,
+ * when we call this function to get SBSA watchdogs info from GTDT, the
+ * pointers stashed in it are stale (since they are early temporary
+ * mappings carried out before acpi_permanent_mmap is set) and we need
+ * to re-initialize them with permanent mapped pointer values to let the
+ * GTDT parsing possible.
+ */
+ ret = acpi_gtdt_init(table, &timer_count);
+ if (ret || !timer_count)
+ return ret;
+
+ for_each_platform_timer(platform_timer) {
+ if (is_non_secure_watchdog(platform_timer)) {
+ ret = gtdt_import_sbsa_gwdt(platform_timer, gwdt_count);
+ if (ret)
+ break;
+ gwdt_count++;
+ }
+ }
+
+ if (gwdt_count)
+ pr_info("found %d SBSA generic Watchdog(s).\n", gwdt_count);
+
+ return ret;
+}
+
+device_initcall(gtdt_sbsa_gwdt_init);
diff --git a/drivers/char/mmtimer.c b/drivers/char/mmtimer.c
index b708c85dc9c1..0e7fcb04f01e 100644
--- a/drivers/char/mmtimer.c
+++ b/drivers/char/mmtimer.c
@@ -478,18 +478,18 @@ static int sgi_clock_period;
static struct timespec sgi_clock_offset;
static int sgi_clock_period;

-static int sgi_clock_get(clockid_t clockid, struct timespec *tp)
+static int sgi_clock_get(clockid_t clockid, struct timespec64 *tp)
{
u64 nsec;

nsec = rtc_time() * sgi_clock_period
+ sgi_clock_offset.tv_nsec;
- *tp = ns_to_timespec(nsec);
+ *tp = ns_to_timespec64(nsec);
tp->tv_sec += sgi_clock_offset.tv_sec;
return 0;
};

-static int sgi_clock_set(const clockid_t clockid, const struct timespec *tp)
+static int sgi_clock_set(const clockid_t clockid, const struct timespec64 *tp)
{

u64 nsec;
@@ -657,7 +657,7 @@ static int sgi_timer_del(struct k_itimer *timr)
}

/* Assumption: it_lock is already held with irq's disabled */
-static void sgi_timer_get(struct k_itimer *timr, struct itimerspec *cur_setting)
+static void sgi_timer_get(struct k_itimer *timr, struct itimerspec64 *cur_setting)
{

if (timr->it.mmtimer.clock == TIMER_OFF) {
@@ -668,14 +668,14 @@ static void sgi_timer_get(struct k_itimer *timr, struct itimerspec *cur_setting)
return;
}

- cur_setting->it_interval = ns_to_timespec(timr->it.mmtimer.incr * sgi_clock_period);
- cur_setting->it_value = ns_to_timespec((timr->it.mmtimer.expires - rtc_time()) * sgi_clock_period);
+ cur_setting->it_interval = ns_to_timespec64(timr->it.mmtimer.incr * sgi_clock_period);
+ cur_setting->it_value = ns_to_timespec64((timr->it.mmtimer.expires - rtc_time()) * sgi_clock_period);
}


static int sgi_timer_set(struct k_itimer *timr, int flags,
- struct itimerspec * new_setting,
- struct itimerspec * old_setting)
+ struct itimerspec64 *new_setting,
+ struct itimerspec64 *old_setting)
{
unsigned long when, period, irqflags;
int err = 0;
@@ -687,8 +687,8 @@ static int sgi_timer_set(struct k_itimer *timr, int flags,
sgi_timer_get(timr, old_setting);

sgi_timer_del(timr);
- when = timespec_to_ns(&new_setting->it_value);
- period = timespec_to_ns(&new_setting->it_interval);
+ when = timespec64_to_ns(&new_setting->it_value);
+ period = timespec64_to_ns(&new_setting->it_interval);

if (when == 0)
/* Clear timer */
@@ -699,11 +699,11 @@ static int sgi_timer_set(struct k_itimer *timr, int flags,
return -ENOMEM;

if (flags & TIMER_ABSTIME) {
- struct timespec n;
+ struct timespec64 n;
unsigned long now;

- getnstimeofday(&n);
- now = timespec_to_ns(&n);
+ getnstimeofday64(&n);
+ now = timespec64_to_ns(&n);
if (when > now)
when -= now;
else
@@ -765,7 +765,7 @@ static int sgi_timer_set(struct k_itimer *timr, int flags,
return err;
}

-static int sgi_clock_getres(const clockid_t which_clock, struct timespec *tp)
+static int sgi_clock_getres(const clockid_t which_clock, struct timespec64 *tp)
{
tp->tv_sec = 0;
tp->tv_nsec = sgi_clock_period;
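
The mmtimer conversion above is mechanical: timespec/itimerspec become their 64-bit-time_t timespec64/itimerspec64 counterparts and the helpers gain a "64" suffix, while the nanosecond arithmetic is unchanged. A standalone sketch of that arithmetic, with local types and simplified to non-negative values (not the kernel implementation):

#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000LL

struct ts64 { int64_t tv_sec; long tv_nsec; };

/* split a nanosecond count into seconds + nanoseconds (non-negative input) */
static struct ts64 ns_to_ts64(int64_t nsec)
{
        struct ts64 ts = {
                .tv_sec  = nsec / NSEC_PER_SEC,
                .tv_nsec = nsec % NSEC_PER_SEC,
        };
        return ts;
}

static int64_t ts64_to_ns(const struct ts64 *ts)
{
        return ts->tv_sec * NSEC_PER_SEC + ts->tv_nsec;
}

int main(void)
{
        struct ts64 ts = ns_to_ts64(5000000123LL);      /* 5.000000123 s */

        printf("%lld s %ld ns -> %lld ns\n",
               (long long)ts.tv_sec, ts.tv_nsec, (long long)ts64_to_ns(&ts));
        return 0;
}
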
diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
index 3356ab821624..545d541ae20e 100644
--- a/drivers/clocksource/Kconfig
+++ b/drivers/clocksource/Kconfig
@@ -67,20 +67,22 @@ config DW_APB_TIMER_OF
select DW_APB_TIMER
select CLKSRC_OF

-config GEMINI_TIMER
- bool "Cortina Gemini timer driver" if COMPILE_TEST
+config FTTMR010_TIMER
+ bool "Faraday Technology timer driver" if COMPILE_TEST
depends on GENERIC_CLOCKEVENTS
depends on HAS_IOMEM
select CLKSRC_MMIO
select CLKSRC_OF
select MFD_SYSCON
help
- Enables support for the Gemini timer
+ Enables support for the Faraday Technology timer block
+ FTTMR010.

config ROCKCHIP_TIMER
bool "Rockchip timer driver" if COMPILE_TEST
depends on ARM || ARM64
select CLKSRC_OF
+ select CLKSRC_MMIO
help
Enables the support for the rockchip timer driver.

@@ -366,6 +368,17 @@ config HISILICON_ERRATUM_161010101
161010101. The workaround will be active if the hisilicon,erratum-161010101
property is found in the timer node.

+config ARM64_ERRATUM_858921
+ bool "Workaround for Cortex-A73 erratum 858921"
+ default y
+ select ARM_ARCH_TIMER_OOL_WORKAROUND
+ depends on ARM_ARCH_TIMER && ARM64
+ help
+ This option enables a workaround applicable to Cortex-A73
+ (all versions), whose counter may return incorrect values.
+ The workaround will be dynamically enabled when an affected
+ core is detected.
+
config ARM_GLOBAL_TIMER
bool "Support for the ARM global timer" if COMPILE_TEST
select CLKSRC_OF if OF
diff --git a/drivers/clocksource/Makefile b/drivers/clocksource/Makefile
index d227d1314f14..2b5b56a6f00f 100644
--- a/drivers/clocksource/Makefile
+++ b/drivers/clocksource/Makefile
@@ -17,7 +17,7 @@ obj-$(CONFIG_CLKSRC_MMIO) += mmio.o
obj-$(CONFIG_DIGICOLOR_TIMER) += timer-digicolor.o
obj-$(CONFIG_DW_APB_TIMER) += dw_apb_timer.o
obj-$(CONFIG_DW_APB_TIMER_OF) += dw_apb_timer_of.o
-obj-$(CONFIG_GEMINI_TIMER) += timer-gemini.o
+obj-$(CONFIG_FTTMR010_TIMER) += timer-fttmr010.o
obj-$(CONFIG_ROCKCHIP_TIMER) += rockchip_timer.o
obj-$(CONFIG_CLKSRC_NOMADIK_MTU) += nomadik-mtu.o
obj-$(CONFIG_CLKSRC_DBX500_PRCMU) += clksrc-dbx500-prcmu.o
diff --git a/drivers/clocksource/arc_timer.c b/drivers/clocksource/arc_timer.c
index 7517f959cba7..21649733827d 100644
--- a/drivers/clocksource/arc_timer.c
+++ b/drivers/clocksource/arc_timer.c
@@ -37,7 +37,7 @@ static int noinline arc_get_timer_clk(struct device_node *node)

clk = of_clk_get(node, 0);
if (IS_ERR(clk)) {
- pr_err("timer missing clk");
+ pr_err("timer missing clk\n");
return PTR_ERR(clk);
}

@@ -89,7 +89,7 @@ static int __init arc_cs_setup_gfrc(struct device_node *node)

READ_BCR(ARC_REG_MCIP_BCR, mp);
if (!mp.gfrc) {
- pr_warn("Global-64-bit-Ctr clocksource not detected");
+ pr_warn("Global-64-bit-Ctr clocksource not detected\n");
return -ENXIO;
}

@@ -140,13 +140,13 @@ static int __init arc_cs_setup_rtc(struct device_node *node)

READ_BCR(ARC_REG_TIMERS_BCR, timer);
if (!timer.rtc) {
- pr_warn("Local-64-bit-Ctr clocksource not detected");
+ pr_warn("Local-64-bit-Ctr clocksource not detected\n");
return -ENXIO;
}

/* Local to CPU hence not usable in SMP */
if (IS_ENABLED(CONFIG_SMP)) {
- pr_warn("Local-64-bit-Ctr not usable in SMP");
+ pr_warn("Local-64-bit-Ctr not usable in SMP\n");
return -EINVAL;
}

@@ -290,13 +290,13 @@ static int __init arc_clockevent_setup(struct device_node *node)

arc_timer_irq = irq_of_parse_and_map(node, 0);
if (arc_timer_irq <= 0) {
- pr_err("clockevent: missing irq");
+ pr_err("clockevent: missing irq\n");
return -EINVAL;
}

ret = arc_get_timer_clk(node);
if (ret) {
- pr_err("clockevent: missing clk");
+ pr_err("clockevent: missing clk\n");
return ret;
}

@@ -313,7 +313,7 @@ static int __init arc_clockevent_setup(struct device_node *node)
arc_timer_starting_cpu,
arc_timer_dying_cpu);
if (ret) {
- pr_err("Failed to setup hotplug state");
+ pr_err("Failed to setup hotplug state\n");
return ret;
}
return 0;
diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index 7a8a4117f123..a1fb918b8021 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -33,6 +33,9 @@

#include <clocksource/arm_arch_timer.h>

+#undef pr_fmt
+#define pr_fmt(fmt) "arch_timer: " fmt
+
#define CNTTIDR 0x08
#define CNTTIDR_VIRT(n) (BIT(1) << ((n) * 4))

@@ -52,8 +55,6 @@
#define CNTV_TVAL 0x38
#define CNTV_CTL 0x3c

-#define ARCH_CP15_TIMER BIT(0)
-#define ARCH_MEM_TIMER BIT(1)
static unsigned arch_timers_present __initdata;

static void __iomem *arch_counter_base;
@@ -66,23 +67,15 @@ struct arch_timer {
#define to_arch_timer(e) container_of(e, struct arch_timer, evt)

static u32 arch_timer_rate;
-
-enum ppi_nr {
- PHYS_SECURE_PPI,
- PHYS_NONSECURE_PPI,
- VIRT_PPI,
- HYP_PPI,
- MAX_TIMER_PPI
-};
-
-static int arch_timer_ppi[MAX_TIMER_PPI];
+static int arch_timer_ppi[ARCH_TIMER_MAX_TIMER_PPI];

static struct clock_event_device __percpu *arch_timer_evt;

-static enum ppi_nr arch_timer_uses_ppi = VIRT_PPI;
+static enum arch_timer_ppi_nr arch_timer_uses_ppi = ARCH_TIMER_VIRT_PPI;
static bool arch_timer_c3stop;
static bool arch_timer_mem_use_virtual;
static bool arch_counter_suspend_stop;
+static bool vdso_default = true;

static bool evtstrm_enable = IS_ENABLED(CONFIG_ARM_ARCH_TIMER_EVTSTREAM);

@@ -96,6 +89,105 @@ early_param("clocksource.arm_arch_timer.evtstrm", early_evtstrm_cfg);
* Architected system timer support.
*/

+static __always_inline
+void arch_timer_reg_write(int access, enum arch_timer_reg reg, u32 val,
+ struct clock_event_device *clk)
+{
+ if (access == ARCH_TIMER_MEM_PHYS_ACCESS) {
+ struct arch_timer *timer = to_arch_timer(clk);
+ switch (reg) {
+ case ARCH_TIMER_REG_CTRL:
+ writel_relaxed(val, timer->base + CNTP_CTL);
+ break;
+ case ARCH_TIMER_REG_TVAL:
+ writel_relaxed(val, timer->base + CNTP_TVAL);
+ break;
+ }
+ } else if (access == ARCH_TIMER_MEM_VIRT_ACCESS) {
+ struct arch_timer *timer = to_arch_timer(clk);
+ switch (reg) {
+ case ARCH_TIMER_REG_CTRL:
+ writel_relaxed(val, timer->base + CNTV_CTL);
+ break;
+ case ARCH_TIMER_REG_TVAL:
+ writel_relaxed(val, timer->base + CNTV_TVAL);
+ break;
+ }
+ } else {
+ arch_timer_reg_write_cp15(access, reg, val);
+ }
+}
+
+static __always_inline
+u32 arch_timer_reg_read(int access, enum arch_timer_reg reg,
+ struct clock_event_device *clk)
+{
+ u32 val;
+
+ if (access == ARCH_TIMER_MEM_PHYS_ACCESS) {
+ struct arch_timer *timer = to_arch_timer(clk);
+ switch (reg) {
+ case ARCH_TIMER_REG_CTRL:
+ val = readl_relaxed(timer->base + CNTP_CTL);
+ break;
+ case ARCH_TIMER_REG_TVAL:
+ val = readl_relaxed(timer->base + CNTP_TVAL);
+ break;
+ }
+ } else if (access == ARCH_TIMER_MEM_VIRT_ACCESS) {
+ struct arch_timer *timer = to_arch_timer(clk);
+ switch (reg) {
+ case ARCH_TIMER_REG_CTRL:
+ val = readl_relaxed(timer->base + CNTV_CTL);
+ break;
+ case ARCH_TIMER_REG_TVAL:
+ val = readl_relaxed(timer->base + CNTV_TVAL);
+ break;
+ }
+ } else {
+ val = arch_timer_reg_read_cp15(access, reg);
+ }
+
+ return val;
+}
+
+/*
+ * Default to cp15 based access because arm64 uses this function for
+ * sched_clock() before DT is probed and the cp15 method is guaranteed
+ * to exist on arm64. arm doesn't use this before DT is probed so even
+ * if we don't have the cp15 accessors we won't have a problem.
+ */
+u64 (*arch_timer_read_counter)(void) = arch_counter_get_cntvct;
+
+static u64 arch_counter_read(struct clocksource *cs)
+{
+ return arch_timer_read_counter();
+}
+
+static u64 arch_counter_read_cc(const struct cyclecounter *cc)
+{
+ return arch_timer_read_counter();
+}
+
+static struct clocksource clocksource_counter = {
+ .name = "arch_sys_counter",
+ .rating = 400,
+ .read = arch_counter_read,
+ .mask = CLOCKSOURCE_MASK(56),
+ .flags = CLOCK_SOURCE_IS_CONTINUOUS,
+};
+
+static struct cyclecounter cyclecounter __ro_after_init = {
+ .read = arch_counter_read_cc,
+ .mask = CLOCKSOURCE_MASK(56),
+};
+
+struct ate_acpi_oem_info {
+ char oem_id[ACPI_OEM_ID_SIZE + 1];
+ char oem_table_id[ACPI_OEM_TABLE_ID_SIZE + 1];
+ u32 oem_revision;
+};
+
#ifdef CONFIG_FSL_ERRATUM_A008585
/*
* The number of retries is an arbitrary value well beyond the highest number
@@ -170,97 +262,289 @@ static u64 notrace hisi_161010101_read_cntvct_el0(void)
{
return __hisi_161010101_read_reg(cntvct_el0);
}
+
+static struct ate_acpi_oem_info hisi_161010101_oem_info[] = {
+ /*
+ * Note that trailing spaces are required to properly match
+ * the OEM table information.
+ */
+ {
+ .oem_id = "HISI ",
+ .oem_table_id = "HIP05 ",
+ .oem_revision = 0,
+ },
+ {
+ .oem_id = "HISI ",
+ .oem_table_id = "HIP06 ",
+ .oem_revision = 0,
+ },
+ {
+ .oem_id = "HISI ",
+ .oem_table_id = "HIP07 ",
+ .oem_revision = 0,
+ },
+ { /* Sentinel indicating the end of the OEM array */ },
+};
+#endif
+
+#ifdef CONFIG_ARM64_ERRATUM_858921
+static u64 notrace arm64_858921_read_cntvct_el0(void)
+{
+ u64 old, new;
+
+ old = read_sysreg(cntvct_el0);
+ new = read_sysreg(cntvct_el0);
+ return (((old ^ new) >> 32) & 1) ? old : new;
+}
#endif
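
The 858921 workaround above relies on two back-to-back reads: the erratum can only corrupt a sample around a carry into the upper 32 bits, so if bit 32 differs between the two samples the first one is returned, otherwise the more recent second one. A userspace sketch of just that selection logic, with made-up counter values:

#include <stdint.h>
#include <stdio.h>

/* bit 32 toggled between the two reads -> trust the first sample */
static uint64_t pick_stable(uint64_t old, uint64_t new)
{
        return (((old ^ new) >> 32) & 1) ? old : new;
}

int main(void)
{
        printf("%llx\n", (unsigned long long)
               pick_stable(0x0ffffffffULL, 0x100000002ULL));    /* -> old */
        printf("%llx\n", (unsigned long long)
               pick_stable(0x100000005ULL, 0x100000008ULL));    /* -> new */
        return 0;
}
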

#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
-const struct arch_timer_erratum_workaround *timer_unstable_counter_workaround = NULL;
+DEFINE_PER_CPU(const struct arch_timer_erratum_workaround *,
+ timer_unstable_counter_workaround);
EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround);

DEFINE_STATIC_KEY_FALSE(arch_timer_read_ool_enabled);
EXPORT_SYMBOL_GPL(arch_timer_read_ool_enabled);

+static void erratum_set_next_event_tval_generic(const int access, unsigned long evt,
+ struct clock_event_device *clk)
+{
+ unsigned long ctrl;
+ u64 cval = evt + arch_counter_get_cntvct();
+
+ ctrl = arch_timer_reg_read(access, ARCH_TIMER_REG_CTRL, clk);
+ ctrl |= ARCH_TIMER_CTRL_ENABLE;
+ ctrl &= ~ARCH_TIMER_CTRL_IT_MASK;
+
+ if (access == ARCH_TIMER_PHYS_ACCESS)
+ write_sysreg(cval, cntp_cval_el0);
+ else
+ write_sysreg(cval, cntv_cval_el0);
+
+ arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk);
+}
+
+static __maybe_unused int erratum_set_next_event_tval_virt(unsigned long evt,
+ struct clock_event_device *clk)
+{
+ erratum_set_next_event_tval_generic(ARCH_TIMER_VIRT_ACCESS, evt, clk);
+ return 0;
+}
+
+static __maybe_unused int erratum_set_next_event_tval_phys(unsigned long evt,
+ struct clock_event_device *clk)
+{
+ erratum_set_next_event_tval_generic(ARCH_TIMER_PHYS_ACCESS, evt, clk);
+ return 0;
+}
+
static const struct arch_timer_erratum_workaround ool_workarounds[] = {
#ifdef CONFIG_FSL_ERRATUM_A008585
{
+ .match_type = ate_match_dt,
.id = "fsl,erratum-a008585",
+ .desc = "Freescale erratum a005858",
.read_cntp_tval_el0 = fsl_a008585_read_cntp_tval_el0,
.read_cntv_tval_el0 = fsl_a008585_read_cntv_tval_el0,
.read_cntvct_el0 = fsl_a008585_read_cntvct_el0,
+ .set_next_event_phys = erratum_set_next_event_tval_phys,
+ .set_next_event_virt = erratum_set_next_event_tval_virt,
},
#endif
#ifdef CONFIG_HISILICON_ERRATUM_161010101
{
+ .match_type = ate_match_dt,
.id = "hisilicon,erratum-161010101",
+ .desc = "HiSilicon erratum 161010101",
.read_cntp_tval_el0 = hisi_161010101_read_cntp_tval_el0,
.read_cntv_tval_el0 = hisi_161010101_read_cntv_tval_el0,
.read_cntvct_el0 = hisi_161010101_read_cntvct_el0,
+ .set_next_event_phys = erratum_set_next_event_tval_phys,
+ .set_next_event_virt = erratum_set_next_event_tval_virt,
+ },
+ {
+ .match_type = ate_match_acpi_oem_info,
+ .id = hisi_161010101_oem_info,
+ .desc = "HiSilicon erratum 161010101",
+ .read_cntp_tval_el0 = hisi_161010101_read_cntp_tval_el0,
+ .read_cntv_tval_el0 = hisi_161010101_read_cntv_tval_el0,
+ .read_cntvct_el0 = hisi_161010101_read_cntvct_el0,
+ .set_next_event_phys = erratum_set_next_event_tval_phys,
+ .set_next_event_virt = erratum_set_next_event_tval_virt,
+ },
+#endif
+#ifdef CONFIG_ARM64_ERRATUM_858921
+ {
+ .match_type = ate_match_local_cap_id,
+ .id = (void *)ARM64_WORKAROUND_858921,
+ .desc = "ARM erratum 858921",
+ .read_cntvct_el0 = arm64_858921_read_cntvct_el0,
},
#endif
};
-#endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */

-static __always_inline
-void arch_timer_reg_write(int access, enum arch_timer_reg reg, u32 val,
- struct clock_event_device *clk)
+typedef bool (*ate_match_fn_t)(const struct arch_timer_erratum_workaround *,
+ const void *);
+
+static
+bool arch_timer_check_dt_erratum(const struct arch_timer_erratum_workaround *wa,
+ const void *arg)
{
- if (access == ARCH_TIMER_MEM_PHYS_ACCESS) {
- struct arch_timer *timer = to_arch_timer(clk);
- switch (reg) {
- case ARCH_TIMER_REG_CTRL:
- writel_relaxed(val, timer->base + CNTP_CTL);
- break;
- case ARCH_TIMER_REG_TVAL:
- writel_relaxed(val, timer->base + CNTP_TVAL);
- break;
- }
- } else if (access == ARCH_TIMER_MEM_VIRT_ACCESS) {
- struct arch_timer *timer = to_arch_timer(clk);
- switch (reg) {
- case ARCH_TIMER_REG_CTRL:
- writel_relaxed(val, timer->base + CNTV_CTL);
- break;
- case ARCH_TIMER_REG_TVAL:
- writel_relaxed(val, timer->base + CNTV_TVAL);
- break;
- }
- } else {
- arch_timer_reg_write_cp15(access, reg, val);
+ const struct device_node *np = arg;
+
+ return of_property_read_bool(np, wa->id);
+}
+
+static
+bool arch_timer_check_local_cap_erratum(const struct arch_timer_erratum_workaround *wa,
+ const void *arg)
+{
+ return this_cpu_has_cap((uintptr_t)wa->id);
+}
+
+
+static
+bool arch_timer_check_acpi_oem_erratum(const struct arch_timer_erratum_workaround *wa,
+ const void *arg)
+{
+ static const struct ate_acpi_oem_info empty_oem_info = {};
+ const struct ate_acpi_oem_info *info = wa->id;
+ const struct acpi_table_header *table = arg;
+
+ /* Iterate over the ACPI OEM info array, looking for a match */
+ while (memcmp(info, &empty_oem_info, sizeof(*info))) {
+ if (!memcmp(info->oem_id, table->oem_id, ACPI_OEM_ID_SIZE) &&
+ !memcmp(info->oem_table_id, table->oem_table_id, ACPI_OEM_TABLE_ID_SIZE) &&
+ info->oem_revision == table->oem_revision)
+ return true;
+
+ info++;
}
+
+ return false;
}

-static __always_inline
-u32 arch_timer_reg_read(int access, enum arch_timer_reg reg,
- struct clock_event_device *clk)
+static const struct arch_timer_erratum_workaround *
+arch_timer_iterate_errata(enum arch_timer_erratum_match_type type,
+ ate_match_fn_t match_fn,
+ void *arg)
{
- u32 val;
+ int i;

- if (access == ARCH_TIMER_MEM_PHYS_ACCESS) {
- struct arch_timer *timer = to_arch_timer(clk);
- switch (reg) {
- case ARCH_TIMER_REG_CTRL:
- val = readl_relaxed(timer->base + CNTP_CTL);
- break;
- case ARCH_TIMER_REG_TVAL:
- val = readl_relaxed(timer->base + CNTP_TVAL);
- break;
- }
- } else if (access == ARCH_TIMER_MEM_VIRT_ACCESS) {
- struct arch_timer *timer = to_arch_timer(clk);
- switch (reg) {
- case ARCH_TIMER_REG_CTRL:
- val = readl_relaxed(timer->base + CNTV_CTL);
- break;
- case ARCH_TIMER_REG_TVAL:
- val = readl_relaxed(timer->base + CNTV_TVAL);
- break;
- }
+ for (i = 0; i < ARRAY_SIZE(ool_workarounds); i++) {
+ if (ool_workarounds[i].match_type != type)
+ continue;
+
+ if (match_fn(&ool_workarounds[i], arg))
+ return &ool_workarounds[i];
+ }
+
+ return NULL;
+}
+
+static
+void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa,
+ bool local)
+{
+ int i;
+
+ if (local) {
+ __this_cpu_write(timer_unstable_counter_workaround, wa);
} else {
- val = arch_timer_reg_read_cp15(access, reg);
+ for_each_possible_cpu(i)
+ per_cpu(timer_unstable_counter_workaround, i) = wa;
}

- return val;
+ static_branch_enable(&arch_timer_read_ool_enabled);
+
+ /*
+ * Don't use the vdso fastpath if errata require using the
+ * out-of-line counter accessor. We may change our mind pretty
+ * late in the game (with a per-CPU erratum, for example), so
+ * change both the default value and the vdso itself.
+ */
+ if (wa->read_cntvct_el0) {
+ clocksource_counter.archdata.vdso_direct = false;
+ vdso_default = false;
+ }
+}
+
+static void arch_timer_check_ool_workaround(enum arch_timer_erratum_match_type type,
+ void *arg)
+{
+ const struct arch_timer_erratum_workaround *wa;
+ ate_match_fn_t match_fn = NULL;
+ bool local = false;
+
+ switch (type) {
+ case ate_match_dt:
+ match_fn = arch_timer_check_dt_erratum;
+ break;
+ case ate_match_local_cap_id:
+ match_fn = arch_timer_check_local_cap_erratum;
+ local = true;
+ break;
+ case ate_match_acpi_oem_info:
+ match_fn = arch_timer_check_acpi_oem_erratum;
+ break;
+ default:
+ WARN_ON(1);
+ return;
+ }
+
+ wa = arch_timer_iterate_errata(type, match_fn, arg);
+ if (!wa)
+ return;
+
+ if (needs_unstable_timer_counter_workaround()) {
+ const struct arch_timer_erratum_workaround *__wa;
+ __wa = __this_cpu_read(timer_unstable_counter_workaround);
+ if (__wa && wa != __wa)
+ pr_warn("Can't enable workaround for %s (clashes with %s\n)",
+ wa->desc, __wa->desc);
+
+ if (__wa)
+ return;
+ }
+
+ arch_timer_enable_workaround(wa, local);
+ pr_info("Enabling %s workaround for %s\n",
+ local ? "local" : "global", wa->desc);
}

+#define erratum_handler(fn, r, ...) \
+({ \
+ bool __val; \
+ if (needs_unstable_timer_counter_workaround()) { \
+ const struct arch_timer_erratum_workaround *__wa; \
+ __wa = __this_cpu_read(timer_unstable_counter_workaround); \
+ if (__wa && __wa->fn) { \
+ r = __wa->fn(__VA_ARGS__); \
+ __val = true; \
+ } else { \
+ __val = false; \
+ } \
+ } else { \
+ __val = false; \
+ } \
+ __val; \
+})
+
+static bool arch_timer_this_cpu_has_cntvct_wa(void)
+{
+ const struct arch_timer_erratum_workaround *wa;
+
+ wa = __this_cpu_read(timer_unstable_counter_workaround);
+ return wa && wa->read_cntvct_el0;
+}
+#else
+#define arch_timer_check_ool_workaround(t,a) do { } while(0)
+#define erratum_set_next_event_tval_virt(...) ({BUG(); 0;})
+#define erratum_set_next_event_tval_phys(...) ({BUG(); 0;})
+#define erratum_handler(fn, r, ...) ({false;})
+#define arch_timer_this_cpu_has_cntvct_wa() ({false;})
+#endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */
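
erratum_handler() is a statement-expression macro so the same dispatch can wrap any per-CPU workaround hook; hand-expanded for one hook it is roughly equivalent to the helper below (a readability sketch reusing the identifiers defined above, not code from this series):

static bool erratum_handle_set_next_event_virt(unsigned long evt,
                                               struct clock_event_device *clk,
                                               int *ret)
{
        const struct arch_timer_erratum_workaround *wa;

        if (!needs_unstable_timer_counter_workaround())
                return false;

        /* per-CPU workaround, installed by arch_timer_enable_workaround() */
        wa = __this_cpu_read(timer_unstable_counter_workaround);
        if (!wa || !wa->set_next_event_virt)
                return false;

        *ret = wa->set_next_event_virt(evt, clk);
        return true;
}
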
+
static __always_inline irqreturn_t timer_handler(const int access,
struct clock_event_device *evt)
{
@@ -348,43 +632,14 @@ static __always_inline void set_next_event(const int access, unsigned long evt,
arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk);
}

-#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
-static __always_inline void erratum_set_next_event_generic(const int access,
- unsigned long evt, struct clock_event_device *clk)
-{
- unsigned long ctrl;
- u64 cval = evt + arch_counter_get_cntvct();
-
- ctrl = arch_timer_reg_read(access, ARCH_TIMER_REG_CTRL, clk);
- ctrl |= ARCH_TIMER_CTRL_ENABLE;
- ctrl &= ~ARCH_TIMER_CTRL_IT_MASK;
-
- if (access == ARCH_TIMER_PHYS_ACCESS)
- write_sysreg(cval, cntp_cval_el0);
- else if (access == ARCH_TIMER_VIRT_ACCESS)
- write_sysreg(cval, cntv_cval_el0);
-
- arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk);
-}
-
-static int erratum_set_next_event_virt(unsigned long evt,
- struct clock_event_device *clk)
-{
- erratum_set_next_event_generic(ARCH_TIMER_VIRT_ACCESS, evt, clk);
- return 0;
-}
-
-static int erratum_set_next_event_phys(unsigned long evt,
- struct clock_event_device *clk)
-{
- erratum_set_next_event_generic(ARCH_TIMER_PHYS_ACCESS, evt, clk);
- return 0;
-}
-#endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */
-
static int arch_timer_set_next_event_virt(unsigned long evt,
struct clock_event_device *clk)
{
+ int ret;
+
+ if (erratum_handler(set_next_event_virt, ret, evt, clk))
+ return ret;
+
set_next_event(ARCH_TIMER_VIRT_ACCESS, evt, clk);
return 0;
}
@@ -392,6 +647,11 @@ static int arch_timer_set_next_event_virt(unsigned long evt,
static int arch_timer_set_next_event_phys(unsigned long evt,
struct clock_event_device *clk)
{
+ int ret;
+
+ if (erratum_handler(set_next_event_phys, ret, evt, clk))
+ return ret;
+
set_next_event(ARCH_TIMER_PHYS_ACCESS, evt, clk);
return 0;
}
@@ -410,25 +670,12 @@ static int arch_timer_set_next_event_phys_mem(unsigned long evt,
return 0;
}

-static void erratum_workaround_set_sne(struct clock_event_device *clk)
-{
-#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
- if (!static_branch_unlikely(&arch_timer_read_ool_enabled))
- return;
-
- if (arch_timer_uses_ppi == VIRT_PPI)
- clk->set_next_event = erratum_set_next_event_virt;
- else
- clk->set_next_event = erratum_set_next_event_phys;
-#endif
-}
-
static void __arch_timer_setup(unsigned type,
struct clock_event_device *clk)
{
clk->features = CLOCK_EVT_FEAT_ONESHOT;

- if (type == ARCH_CP15_TIMER) {
+ if (type == ARCH_TIMER_TYPE_CP15) {
if (arch_timer_c3stop)
clk->features |= CLOCK_EVT_FEAT_C3STOP;
clk->name = "arch_sys_timer";
@@ -436,14 +683,14 @@ static void __arch_timer_setup(unsigned type,
clk->cpumask = cpumask_of(smp_processor_id());
clk->irq = arch_timer_ppi[arch_timer_uses_ppi];
switch (arch_timer_uses_ppi) {
- case VIRT_PPI:
+ case ARCH_TIMER_VIRT_PPI:
clk->set_state_shutdown = arch_timer_shutdown_virt;
clk->set_state_oneshot_stopped = arch_timer_shutdown_virt;
clk->set_next_event = arch_timer_set_next_event_virt;
break;
- case PHYS_SECURE_PPI:
- case PHYS_NONSECURE_PPI:
- case HYP_PPI:
+ case ARCH_TIMER_PHYS_SECURE_PPI:
+ case ARCH_TIMER_PHYS_NONSECURE_PPI:
+ case ARCH_TIMER_HYP_PPI:
clk->set_state_shutdown = arch_timer_shutdown_phys;
clk->set_state_oneshot_stopped = arch_timer_shutdown_phys;
clk->set_next_event = arch_timer_set_next_event_phys;
@@ -452,7 +699,7 @@ static void __arch_timer_setup(unsigned type,
BUG();
}

- erratum_workaround_set_sne(clk);
+ arch_timer_check_ool_workaround(ate_match_local_cap_id, NULL);
} else {
clk->features |= CLOCK_EVT_FEAT_DYNIRQ;
clk->name = "arch_mem_timer";
@@ -508,23 +755,31 @@ static void arch_counter_set_user_access(void)
{
u32 cntkctl = arch_timer_get_cntkctl();

- /* Disable user access to the timers and the physical counter */
+ /* Disable user access to the timers and both counters */
/* Also disable virtual event stream */
cntkctl &= ~(ARCH_TIMER_USR_PT_ACCESS_EN
| ARCH_TIMER_USR_VT_ACCESS_EN
+ | ARCH_TIMER_USR_VCT_ACCESS_EN
| ARCH_TIMER_VIRT_EVT_EN
| ARCH_TIMER_USR_PCT_ACCESS_EN);

- /* Enable user access to the virtual counter */
- cntkctl |= ARCH_TIMER_USR_VCT_ACCESS_EN;
+ /*
+ * Enable user access to the virtual counter if it doesn't
+ * need to be worked around. The vdso may already have been
+ * disabled, though.
+ */
+ if (arch_timer_this_cpu_has_cntvct_wa())
+ pr_info("CPU%d: Trapping CNTVCT access\n", smp_processor_id());
+ else
+ cntkctl |= ARCH_TIMER_USR_VCT_ACCESS_EN;

arch_timer_set_cntkctl(cntkctl);
}

static bool arch_timer_has_nonsecure_ppi(void)
{
- return (arch_timer_uses_ppi == PHYS_SECURE_PPI &&
- arch_timer_ppi[PHYS_NONSECURE_PPI]);
+ return (arch_timer_uses_ppi == ARCH_TIMER_PHYS_SECURE_PPI &&
+ arch_timer_ppi[ARCH_TIMER_PHYS_NONSECURE_PPI]);
}

static u32 check_ppi_trigger(int irq)
@@ -545,14 +800,15 @@ static int arch_timer_starting_cpu(unsigned int cpu)
struct clock_event_device *clk = this_cpu_ptr(arch_timer_evt);
u32 flags;

- __arch_timer_setup(ARCH_CP15_TIMER, clk);
+ __arch_timer_setup(ARCH_TIMER_TYPE_CP15, clk);

flags = check_ppi_trigger(arch_timer_ppi[arch_timer_uses_ppi]);
enable_percpu_irq(arch_timer_ppi[arch_timer_uses_ppi], flags);

if (arch_timer_has_nonsecure_ppi()) {
- flags = check_ppi_trigger(arch_timer_ppi[PHYS_NONSECURE_PPI]);
- enable_percpu_irq(arch_timer_ppi[PHYS_NONSECURE_PPI], flags);
+ flags = check_ppi_trigger(arch_timer_ppi[ARCH_TIMER_PHYS_NONSECURE_PPI]);
+ enable_percpu_irq(arch_timer_ppi[ARCH_TIMER_PHYS_NONSECURE_PPI],
+ flags);
}

arch_counter_set_user_access();
@@ -562,43 +818,39 @@ static int arch_timer_starting_cpu(unsigned int cpu)
return 0;
}

-static void
-arch_timer_detect_rate(void __iomem *cntbase, struct device_node *np)
+/*
+ * For historical reasons, when probing with DT we use whichever (non-zero)
+ * rate was probed first, and don't verify that others match. If the first node
+ * probed has a clock-frequency property, this overrides the HW register.
+ */
+static void arch_timer_of_configure_rate(u32 rate, struct device_node *np)
{
/* Who has more than one independent system counter? */
if (arch_timer_rate)
return;

- /*
- * Try to determine the frequency from the device tree or CNTFRQ,
- * if ACPI is enabled, get the frequency from CNTFRQ ONLY.
- */
- if (!acpi_disabled ||
- of_property_read_u32(np, "clock-frequency", &arch_timer_rate)) {
- if (cntbase)
- arch_timer_rate = readl_relaxed(cntbase + CNTFRQ);
- else
- arch_timer_rate = arch_timer_get_cntfrq();
- }
+ if (of_property_read_u32(np, "clock-frequency", &arch_timer_rate))
+ arch_timer_rate = rate;

/* Check the timer frequency. */
if (arch_timer_rate == 0)
- pr_warn("Architected timer frequency not available\n");
+ pr_warn("frequency not available\n");
}

static void arch_timer_banner(unsigned type)
{
- pr_info("Architected %s%s%s timer(s) running at %lu.%02luMHz (%s%s%s).\n",
- type & ARCH_CP15_TIMER ? "cp15" : "",
- type == (ARCH_CP15_TIMER | ARCH_MEM_TIMER) ? " and " : "",
- type & ARCH_MEM_TIMER ? "mmio" : "",
- (unsigned long)arch_timer_rate / 1000000,
- (unsigned long)(arch_timer_rate / 10000) % 100,
- type & ARCH_CP15_TIMER ?
- (arch_timer_uses_ppi == VIRT_PPI) ? "virt" : "phys" :
+ pr_info("%s%s%s timer(s) running at %lu.%02luMHz (%s%s%s).\n",
+ type & ARCH_TIMER_TYPE_CP15 ? "cp15" : "",
+ type == (ARCH_TIMER_TYPE_CP15 | ARCH_TIMER_TYPE_MEM) ?
+ " and " : "",
+ type & ARCH_TIMER_TYPE_MEM ? "mmio" : "",
+ (unsigned long)arch_timer_rate / 1000000,
+ (unsigned long)(arch_timer_rate / 10000) % 100,
+ type & ARCH_TIMER_TYPE_CP15 ?
+ (arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) ? "virt" : "phys" :
"",
- type == (ARCH_CP15_TIMER | ARCH_MEM_TIMER) ? "/" : "",
- type & ARCH_MEM_TIMER ?
+ type == (ARCH_TIMER_TYPE_CP15 | ARCH_TIMER_TYPE_MEM) ? "/" : "",
+ type & ARCH_TIMER_TYPE_MEM ?
arch_timer_mem_use_virtual ? "virt" : "phys" :
"");
}
@@ -621,37 +873,6 @@ static u64 arch_counter_get_cntvct_mem(void)
return ((u64) vct_hi << 32) | vct_lo;
}

-/*
- * Default to cp15 based access because arm64 uses this function for
- * sched_clock() before DT is probed and the cp15 method is guaranteed
- * to exist on arm64. arm doesn't use this before DT is probed so even
- * if we don't have the cp15 accessors we won't have a problem.
- */
-u64 (*arch_timer_read_counter)(void) = arch_counter_get_cntvct;
-
-static u64 arch_counter_read(struct clocksource *cs)
-{
- return arch_timer_read_counter();
-}
-
-static u64 arch_counter_read_cc(const struct cyclecounter *cc)
-{
- return arch_timer_read_counter();
-}
-
-static struct clocksource clocksource_counter = {
- .name = "arch_sys_counter",
- .rating = 400,
- .read = arch_counter_read,
- .mask = CLOCKSOURCE_MASK(56),
- .flags = CLOCK_SOURCE_IS_CONTINUOUS,
-};
-
-static struct cyclecounter cyclecounter __ro_after_init = {
- .read = arch_counter_read_cc,
- .mask = CLOCKSOURCE_MASK(56),
-};
-
static struct arch_timer_kvm_info arch_timer_kvm_info;

struct arch_timer_kvm_info *arch_timer_get_kvm_info(void)
@@ -664,22 +885,14 @@ static void __init arch_counter_register(unsigned type)
u64 start_count;

/* Register the CP15 based counter if we have one */
- if (type & ARCH_CP15_TIMER) {
- if (IS_ENABLED(CONFIG_ARM64) || arch_timer_uses_ppi == VIRT_PPI)
+ if (type & ARCH_TIMER_TYPE_CP15) {
+ if (IS_ENABLED(CONFIG_ARM64) ||
+ arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI)
arch_timer_read_counter = arch_counter_get_cntvct;
else
arch_timer_read_counter = arch_counter_get_cntpct;

- clocksource_counter.archdata.vdso_direct = true;
-
-#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
- /*
- * Don't use the vdso fastpath if errata require using
- * the out-of-line counter accessor.
- */
- if (static_branch_unlikely(&arch_timer_read_ool_enabled))
- clocksource_counter.archdata.vdso_direct = false;
-#endif
+ clocksource_counter.archdata.vdso_direct = vdso_default;
} else {
arch_timer_read_counter = arch_counter_get_cntvct_mem;
}
@@ -699,12 +912,11 @@ static void __init arch_counter_register(unsigned type)

static void arch_timer_stop(struct clock_event_device *clk)
{
- pr_debug("arch_timer_teardown disable IRQ%d cpu #%d\n",
- clk->irq, smp_processor_id());
+ pr_debug("disable IRQ%d cpu #%d\n", clk->irq, smp_processor_id());

disable_percpu_irq(arch_timer_ppi[arch_timer_uses_ppi]);
if (arch_timer_has_nonsecure_ppi())
- disable_percpu_irq(arch_timer_ppi[PHYS_NONSECURE_PPI]);
+ disable_percpu_irq(arch_timer_ppi[ARCH_TIMER_PHYS_NONSECURE_PPI]);

clk->set_state_shutdown(clk);
}
@@ -718,14 +930,14 @@ static int arch_timer_dying_cpu(unsigned int cpu)
}

#ifdef CONFIG_CPU_PM
-static unsigned int saved_cntkctl;
+static DEFINE_PER_CPU(unsigned long, saved_cntkctl);
static int arch_timer_cpu_pm_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
{
if (action == CPU_PM_ENTER)
- saved_cntkctl = arch_timer_get_cntkctl();
+ __this_cpu_write(saved_cntkctl, arch_timer_get_cntkctl());
else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT)
- arch_timer_set_cntkctl(saved_cntkctl);
+ arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
return NOTIFY_OK;
}

@@ -767,24 +979,24 @@ static int __init arch_timer_register(void)

ppi = arch_timer_ppi[arch_timer_uses_ppi];
switch (arch_timer_uses_ppi) {
- case VIRT_PPI:
+ case ARCH_TIMER_VIRT_PPI:
err = request_percpu_irq(ppi, arch_timer_handler_virt,
"arch_timer", arch_timer_evt);
break;
- case PHYS_SECURE_PPI:
- case PHYS_NONSECURE_PPI:
+ case ARCH_TIMER_PHYS_SECURE_PPI:
+ case ARCH_TIMER_PHYS_NONSECURE_PPI:
err = request_percpu_irq(ppi, arch_timer_handler_phys,
"arch_timer", arch_timer_evt);
- if (!err && arch_timer_ppi[PHYS_NONSECURE_PPI]) {
- ppi = arch_timer_ppi[PHYS_NONSECURE_PPI];
+ if (!err && arch_timer_has_nonsecure_ppi()) {
+ ppi = arch_timer_ppi[ARCH_TIMER_PHYS_NONSECURE_PPI];
err = request_percpu_irq(ppi, arch_timer_handler_phys,
"arch_timer", arch_timer_evt);
if (err)
- free_percpu_irq(arch_timer_ppi[PHYS_SECURE_PPI],
+ free_percpu_irq(arch_timer_ppi[ARCH_TIMER_PHYS_SECURE_PPI],
arch_timer_evt);
}
break;
- case HYP_PPI:
+ case ARCH_TIMER_HYP_PPI:
err = request_percpu_irq(ppi, arch_timer_handler_phys,
"arch_timer", arch_timer_evt);
break;
@@ -793,8 +1005,7 @@ static int __init arch_timer_register(void)
}

if (err) {
- pr_err("arch_timer: can't register interrupt %d (%d)\n",
- ppi, err);
+ pr_err("can't register interrupt %d (%d)\n", ppi, err);
goto out_free;
}

@@ -817,7 +1028,7 @@ static int __init arch_timer_register(void)
out_unreg_notify:
free_percpu_irq(arch_timer_ppi[arch_timer_uses_ppi], arch_timer_evt);
if (arch_timer_has_nonsecure_ppi())
- free_percpu_irq(arch_timer_ppi[PHYS_NONSECURE_PPI],
+ free_percpu_irq(arch_timer_ppi[ARCH_TIMER_PHYS_NONSECURE_PPI],
arch_timer_evt);

out_free:
@@ -838,7 +1049,7 @@ static int __init arch_timer_mem_register(void __iomem *base, unsigned int irq)

t->base = base;
t->evt.irq = irq;
- __arch_timer_setup(ARCH_MEM_TIMER, &t->evt);
+ __arch_timer_setup(ARCH_TIMER_TYPE_MEM, &t->evt);

if (arch_timer_mem_use_virtual)
func = arch_timer_handler_virt_mem;
@@ -847,7 +1058,7 @@ static int __init arch_timer_mem_register(void __iomem *base, unsigned int irq)

ret = request_irq(irq, func, IRQF_TIMER, "arch_mem_timer", &t->evt);
if (ret) {
- pr_err("arch_timer: Failed to request mem timer irq\n");
+ pr_err("Failed to request mem timer irq\n");
kfree(t);
}

@@ -865,15 +1076,28 @@ static const struct of_device_id arch_timer_mem_of_match[] __initconst = {
{},
};

-static bool __init
-arch_timer_needs_probing(int type, const struct of_device_id *matches)
+static bool __init arch_timer_needs_of_probing(void)
{
struct device_node *dn;
bool needs_probing = false;
+ unsigned int mask = ARCH_TIMER_TYPE_CP15 | ARCH_TIMER_TYPE_MEM;
+
+ /* We have two timers, and both device-tree nodes are probed. */
+ if ((arch_timers_present & mask) == mask)
+ return false;

- dn = of_find_matching_node(NULL, matches);
- if (dn && of_device_is_available(dn) && !(arch_timers_present & type))
+ /*
+ * Only one type of timer is probed,
+ * check if we have another type of timer node in device-tree.
+ */
+ if (arch_timers_present & ARCH_TIMER_TYPE_CP15)
+ dn = of_find_matching_node(NULL, arch_timer_mem_of_match);
+ else
+ dn = of_find_matching_node(NULL, arch_timer_of_match);
+
+ if (dn && of_device_is_available(dn))
needs_probing = true;
+
of_node_put(dn);

return needs_probing;
@@ -881,96 +1105,66 @@ arch_timer_needs_probing(int type, const struct of_device_id *matches)

static int __init arch_timer_common_init(void)
{
- unsigned mask = ARCH_CP15_TIMER | ARCH_MEM_TIMER;
-
- /* Wait until both nodes are probed if we have two timers */
- if ((arch_timers_present & mask) != mask) {
- if (arch_timer_needs_probing(ARCH_MEM_TIMER, arch_timer_mem_of_match))
- return 0;
- if (arch_timer_needs_probing(ARCH_CP15_TIMER, arch_timer_of_match))
- return 0;
- }
-
arch_timer_banner(arch_timers_present);
arch_counter_register(arch_timers_present);
return arch_timer_arch_init();
}

-static int __init arch_timer_init(void)
+/**
+ * arch_timer_select_ppi() - Select suitable PPI for the current system.
+ *
+ * If HYP mode is available, we know that the physical timer
+ * has been configured to be accessible from PL1. Use it, so
+ * that a guest can use the virtual timer instead.
+ *
+ * On ARMv8.1 with VH extensions, the kernel runs in HYP. VHE
+ * accesses to CNTP_*_EL1 registers are silently redirected to
+ * their CNTHP_*_EL2 counterparts, and use a different PPI
+ * number.
+ *
+ * If no interrupt provided for virtual timer, we'll have to
+ * stick to the physical timer. It'd better be accessible...
+ * For arm64 we never use the secure interrupt.
+ *
+ * Return: a suitable PPI type for the current system.
+ */
+static enum arch_timer_ppi_nr __init arch_timer_select_ppi(void)
{
- int ret;
- /*
- * If HYP mode is available, we know that the physical timer
- * has been configured to be accessible from PL1. Use it, so
- * that a guest can use the virtual timer instead.
- *
- * If no interrupt provided for virtual timer, we'll have to
- * stick to the physical timer. It'd better be accessible...
- *
- * On ARMv8.1 with VH extensions, the kernel runs in HYP. VHE
- * accesses to CNTP_*_EL1 registers are silently redirected to
- * their CNTHP_*_EL2 counterparts, and use a different PPI
- * number.
- */
- if (is_hyp_mode_available() || !arch_timer_ppi[VIRT_PPI]) {
- bool has_ppi;
-
- if (is_kernel_in_hyp_mode()) {
- arch_timer_uses_ppi = HYP_PPI;
- has_ppi = !!arch_timer_ppi[HYP_PPI];
- } else {
- arch_timer_uses_ppi = PHYS_SECURE_PPI;
- has_ppi = (!!arch_timer_ppi[PHYS_SECURE_PPI] ||
- !!arch_timer_ppi[PHYS_NONSECURE_PPI]);
- }
-
- if (!has_ppi) {
- pr_warn("arch_timer: No interrupt available, giving up\n");
- return -EINVAL;
- }
- }
+ if (is_kernel_in_hyp_mode())
+ return ARCH_TIMER_HYP_PPI;

- ret = arch_timer_register();
- if (ret)
- return ret;
+ if (!is_hyp_mode_available() && arch_timer_ppi[ARCH_TIMER_VIRT_PPI])
+ return ARCH_TIMER_VIRT_PPI;

- ret = arch_timer_common_init();
- if (ret)
- return ret;
+ if (IS_ENABLED(CONFIG_ARM64))
+ return ARCH_TIMER_PHYS_NONSECURE_PPI;

- arch_timer_kvm_info.virtual_irq = arch_timer_ppi[VIRT_PPI];
-
- return 0;
+ return ARCH_TIMER_PHYS_SECURE_PPI;
}

static int __init arch_timer_of_init(struct device_node *np)
{
- int i;
+ int i, ret;
+ u32 rate;

- if (arch_timers_present & ARCH_CP15_TIMER) {
- pr_warn("arch_timer: multiple nodes in dt, skipping\n");
+ if (arch_timers_present & ARCH_TIMER_TYPE_CP15) {
+ pr_warn("multiple nodes in dt, skipping\n");
return 0;
}

- arch_timers_present |= ARCH_CP15_TIMER;
- for (i = PHYS_SECURE_PPI; i < MAX_TIMER_PPI; i++)
+ arch_timers_present |= ARCH_TIMER_TYPE_CP15;
+ for (i = ARCH_TIMER_PHYS_SECURE_PPI; i < ARCH_TIMER_MAX_TIMER_PPI; i++)
arch_timer_ppi[i] = irq_of_parse_and_map(np, i);

- arch_timer_detect_rate(NULL, np);
+ arch_timer_kvm_info.virtual_irq = arch_timer_ppi[ARCH_TIMER_VIRT_PPI];
+
+ rate = arch_timer_get_cntfrq();
+ arch_timer_of_configure_rate(rate, np);

arch_timer_c3stop = !of_property_read_bool(np, "always-on");

-#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
- for (i = 0; i < ARRAY_SIZE(ool_workarounds); i++) {
- if (of_property_read_bool(np, ool_workarounds[i].id)) {
- timer_unstable_counter_workaround = &ool_workarounds[i];
- static_branch_enable(&arch_timer_read_ool_enabled);
- pr_info("arch_timer: Enabling workaround for %s\n",
- timer_unstable_counter_workaround->id);
- break;
- }
- }
-#endif
+ /* Check for globally applicable workarounds */
+ arch_timer_check_ool_workaround(ate_match_dt, np);

/*
* If we cannot rely on firmware initializing the timer registers then
@@ -978,29 +1172,63 @@ static int __init arch_timer_of_init(struct device_node *np)
*/
if (IS_ENABLED(CONFIG_ARM) &&
of_property_read_bool(np, "arm,cpu-registers-not-fw-configured"))
- arch_timer_uses_ppi = PHYS_SECURE_PPI;
+ arch_timer_uses_ppi = ARCH_TIMER_PHYS_SECURE_PPI;
+ else
+ arch_timer_uses_ppi = arch_timer_select_ppi();
+
+ if (!arch_timer_ppi[arch_timer_uses_ppi]) {
+ pr_err("No interrupt available, giving up\n");
+ return -EINVAL;
+ }

/* On some systems, the counter stops ticking when in suspend. */
arch_counter_suspend_stop = of_property_read_bool(np,
"arm,no-tick-in-suspend");

- return arch_timer_init();
+ ret = arch_timer_register();
+ if (ret)
+ return ret;
+
+ if (arch_timer_needs_of_probing())
+ return 0;
+
+ return arch_timer_common_init();
}
CLOCKSOURCE_OF_DECLARE(armv7_arch_timer, "arm,armv7-timer", arch_timer_of_init);
CLOCKSOURCE_OF_DECLARE(armv8_arch_timer, "arm,armv8-timer", arch_timer_of_init);

-static int __init arch_timer_mem_init(struct device_node *np)
+static u32 __init
+arch_timer_mem_frame_get_cntfrq(struct arch_timer_mem_frame *frame)
{
- struct device_node *frame, *best_frame = NULL;
- void __iomem *cntctlbase, *base;
- unsigned int irq, ret = -EINVAL;
+ void __iomem *base;
+ u32 rate;
+
+ base = ioremap(frame->cntbase, frame->size);
+ if (!base) {
+ pr_err("Unable to map frame @ %pa\n", &frame->cntbase);
+ return 0;
+ }
+
+ rate = readl_relaxed(base + CNTFRQ);
+
+ iounmap(base);
+
+ return rate;
+}
+
+static struct arch_timer_mem_frame * __init
+arch_timer_mem_find_best_frame(struct arch_timer_mem *timer_mem)
+{
+ struct arch_timer_mem_frame *frame, *best_frame = NULL;
+ void __iomem *cntctlbase;
u32 cnttidr;
+ int i;

- arch_timers_present |= ARCH_MEM_TIMER;
- cntctlbase = of_iomap(np, 0);
+ cntctlbase = ioremap(timer_mem->cntctlbase, timer_mem->size);
if (!cntctlbase) {
- pr_err("arch_timer: Can't find CNTCTLBase\n");
- return -ENXIO;
+ pr_err("Can't map CNTCTLBase @ %pa\n",
+ &timer_mem->cntctlbase);
+ return NULL;
}

cnttidr = readl_relaxed(cntctlbase + CNTTIDR);
@@ -1009,25 +1237,20 @@ static int __init arch_timer_mem_init(struct device_node *np)
* Try to find a virtual capable frame. Otherwise fall back to a
* physical capable frame.
*/
- for_each_available_child_of_node(np, frame) {
- int n;
- u32 cntacr;
+ for (i = 0; i < ARCH_TIMER_MEM_MAX_FRAMES; i++) {
+ u32 cntacr = CNTACR_RFRQ | CNTACR_RWPT | CNTACR_RPCT |
+ CNTACR_RWVT | CNTACR_RVOFF | CNTACR_RVCT;

- if (of_property_read_u32(frame, "frame-number", &n)) {
- pr_err("arch_timer: Missing frame-number\n");
- of_node_put(frame);
- goto out;
- }
+ frame = &timer_mem->frame[i];
+ if (!frame->valid)
+ continue;

/* Try enabling everything, and see what sticks */
- cntacr = CNTACR_RFRQ | CNTACR_RWPT | CNTACR_RPCT |
- CNTACR_RWVT | CNTACR_RVOFF | CNTACR_RVCT;
- writel_relaxed(cntacr, cntctlbase + CNTACR(n));
- cntacr = readl_relaxed(cntctlbase + CNTACR(n));
+ writel_relaxed(cntacr, cntctlbase + CNTACR(i));
+ cntacr = readl_relaxed(cntctlbase + CNTACR(i));

- if ((cnttidr & CNTTIDR_VIRT(n)) &&
+ if ((cnttidr & CNTTIDR_VIRT(i)) &&
!(~cntacr & (CNTACR_RWVT | CNTACR_RVCT))) {
- of_node_put(best_frame);
best_frame = frame;
arch_timer_mem_use_virtual = true;
break;
@@ -1036,99 +1259,262 @@ static int __init arch_timer_mem_init(struct device_node *np)
if (~cntacr & (CNTACR_RWPT | CNTACR_RPCT))
continue;

- of_node_put(best_frame);
- best_frame = of_node_get(frame);
+ best_frame = frame;
}

- ret= -ENXIO;
- base = arch_counter_base = of_io_request_and_map(best_frame, 0,
- "arch_mem_timer");
- if (IS_ERR(base)) {
- pr_err("arch_timer: Can't map frame's registers\n");
- goto out;
- }
+ iounmap(cntctlbase);
+
+ if (!best_frame)
+ pr_err("Unable to find a suitable frame in timer @ %pa\n",
+ &timer_mem->cntctlbase);
+
+ return best_frame;
+}
+
+static int __init
+arch_timer_mem_frame_register(struct arch_timer_mem_frame *frame)
+{
+ void __iomem *base;
+ int ret, irq = 0;

if (arch_timer_mem_use_virtual)
- irq = irq_of_parse_and_map(best_frame, 1);
+ irq = frame->virt_irq;
else
- irq = irq_of_parse_and_map(best_frame, 0);
+ irq = frame->phys_irq;

- ret = -EINVAL;
if (!irq) {
- pr_err("arch_timer: Frame missing %s irq",
+ pr_err("Frame missing %s irq.\n",
arch_timer_mem_use_virtual ? "virt" : "phys");
- goto out;
+ return -EINVAL;
+ }
+
+ if (!request_mem_region(frame->cntbase, frame->size,
+ "arch_mem_timer"))
+ return -EBUSY;
+
+ base = ioremap(frame->cntbase, frame->size);
+ if (!base) {
+ pr_err("Can't map frame's registers\n");
+ return -ENXIO;
}

- arch_timer_detect_rate(base, np);
ret = arch_timer_mem_register(base, irq);
- if (ret)
+ if (ret) {
+ iounmap(base);
+ return ret;
+ }
+
+ arch_counter_base = base;
+ arch_timers_present |= ARCH_TIMER_TYPE_MEM;
+
+ return 0;
+}
+
+static int __init arch_timer_mem_of_init(struct device_node *np)
+{
+ struct arch_timer_mem *timer_mem;
+ struct arch_timer_mem_frame *frame;
+ struct device_node *frame_node;
+ struct resource res;
+ int ret = -EINVAL;
+ u32 rate;
+
+ timer_mem = kzalloc(sizeof(*timer_mem), GFP_KERNEL);
+ if (!timer_mem)
+ return -ENOMEM;
+
+ if (of_address_to_resource(np, 0, &res))
goto out;
+ timer_mem->cntctlbase = res.start;
+ timer_mem->size = resource_size(&res);

- return arch_timer_common_init();
+ for_each_available_child_of_node(np, frame_node) {
+ u32 n;
+ struct arch_timer_mem_frame *frame;
+
+ if (of_property_read_u32(frame_node, "frame-number", &n)) {
+ pr_err(FW_BUG "Missing frame-number.\n");
+ of_node_put(frame_node);
+ goto out;
+ }
+ if (n >= ARCH_TIMER_MEM_MAX_FRAMES) {
+ pr_err(FW_BUG "Wrong frame-number, only 0-%u are permitted.\n",
+ ARCH_TIMER_MEM_MAX_FRAMES - 1);
+ of_node_put(frame_node);
+ goto out;
+ }
+ frame = &timer_mem->frame[n];
+
+ if (frame->valid) {
+ pr_err(FW_BUG "Duplicated frame-number.\n");
+ of_node_put(frame_node);
+ goto out;
+ }
+
+ if (of_address_to_resource(frame_node, 0, &res)) {
+ of_node_put(frame_node);
+ goto out;
+ }
+ frame->cntbase = res.start;
+ frame->size = resource_size(&res);
+
+ frame->virt_irq = irq_of_parse_and_map(frame_node,
+ ARCH_TIMER_VIRT_SPI);
+ frame->phys_irq = irq_of_parse_and_map(frame_node,
+ ARCH_TIMER_PHYS_SPI);
+
+ frame->valid = true;
+ }
+
+ frame = arch_timer_mem_find_best_frame(timer_mem);
+ if (!frame) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ rate = arch_timer_mem_frame_get_cntfrq(frame);
+ arch_timer_of_configure_rate(rate, np);
+
+ ret = arch_timer_mem_frame_register(frame);
+ if (!ret && !arch_timer_needs_of_probing())
+ ret = arch_timer_common_init();
out:
- iounmap(cntctlbase);
- of_node_put(best_frame);
+ kfree(timer_mem);
return ret;
}
CLOCKSOURCE_OF_DECLARE(armv7_arch_timer_mem, "arm,armv7-timer-mem",
- arch_timer_mem_init);
+ arch_timer_mem_of_init);

-#ifdef CONFIG_ACPI
-static int __init map_generic_timer_interrupt(u32 interrupt, u32 flags)
+#ifdef CONFIG_ACPI_GTDT
+static int __init
+arch_timer_mem_verify_cntfrq(struct arch_timer_mem *timer_mem)
{
- int trigger, polarity;
+ struct arch_timer_mem_frame *frame;
+ u32 rate;
+ int i;

- if (!interrupt)
- return 0;
+ for (i = 0; i < ARCH_TIMER_MEM_MAX_FRAMES; i++) {
+ frame = &timer_mem->frame[i];

- trigger = (flags & ACPI_GTDT_INTERRUPT_MODE) ? ACPI_EDGE_SENSITIVE
- : ACPI_LEVEL_SENSITIVE;
+ if (!frame->valid)
+ continue;
+
+ rate = arch_timer_mem_frame_get_cntfrq(frame);
+ if (rate == arch_timer_rate)
+ continue;
+
+ pr_err(FW_BUG "CNTFRQ mismatch: frame @ %pa: (0x%08lx), CPU: (0x%08lx)\n",
+ &frame->cntbase,
+ (unsigned long)rate, (unsigned long)arch_timer_rate);

- polarity = (flags & ACPI_GTDT_INTERRUPT_POLARITY) ? ACPI_ACTIVE_LOW
- : ACPI_ACTIVE_HIGH;
+ return -EINVAL;
+ }

- return acpi_register_gsi(NULL, interrupt, trigger, polarity);
+ return 0;
}

-/* Initialize per-processor generic timer */
+static int __init arch_timer_mem_acpi_init(int platform_timer_count)
+{
+ struct arch_timer_mem *timers, *timer;
+ struct arch_timer_mem_frame *frame;
+ int timer_count, i, ret = 0;
+
+ timers = kcalloc(platform_timer_count, sizeof(*timers),
+ GFP_KERNEL);
+ if (!timers)
+ return -ENOMEM;
+
+ ret = acpi_arch_timer_mem_init(timers, &timer_count);
+ if (ret || !timer_count)
+ goto out;
+
+ for (i = 0; i < timer_count; i++) {
+ ret = arch_timer_mem_verify_cntfrq(&timers[i]);
+ if (ret) {
+ pr_err("Disabling MMIO timers due to CNTFRQ mismatch\n");
+ goto out;
+ }
+ }
+
+ /*
+ * While unlikely, it's theoretically possible that none of the frames
+ * in a timer expose the combination of features we want.
+ */
+ for (i = 0; i < timer_count; i++) {
+ timer = &timers[i];
+
+ frame = arch_timer_mem_find_best_frame(timer);
+ if (frame)
+ break;
+ }
+
+ if (frame)
+ ret = arch_timer_mem_frame_register(frame);
+out:
+ kfree(timers);
+ return ret;
+}
+
+/* Initialize per-processor generic timer and memory-mapped timer (if present) */
static int __init arch_timer_acpi_init(struct acpi_table_header *table)
{
- struct acpi_table_gtdt *gtdt;
+ int ret, platform_timer_count;

- if (arch_timers_present & ARCH_CP15_TIMER) {
- pr_warn("arch_timer: already initialized, skipping\n");
+ if (arch_timers_present & ARCH_TIMER_TYPE_CP15) {
+ pr_warn("already initialized, skipping\n");
return -EINVAL;
}

- gtdt = container_of(table, struct acpi_table_gtdt, header);
+ arch_timers_present |= ARCH_TIMER_TYPE_CP15;
+
+ ret = acpi_gtdt_init(table, &platform_timer_count);
+ if (ret) {
+ pr_err("Failed to init GTDT table.\n");
+ return ret;
+ }

- arch_timers_present |= ARCH_CP15_TIMER;
+ arch_timer_ppi[ARCH_TIMER_PHYS_NONSECURE_PPI] =
+ acpi_gtdt_map_ppi(ARCH_TIMER_PHYS_NONSECURE_PPI);

- arch_timer_ppi[PHYS_SECURE_PPI] =
- map_generic_timer_interrupt(gtdt->secure_el1_interrupt,
- gtdt->secure_el1_flags);
+ arch_timer_ppi[ARCH_TIMER_VIRT_PPI] =
+ acpi_gtdt_map_ppi(ARCH_TIMER_VIRT_PPI);

- arch_timer_ppi[PHYS_NONSECURE_PPI] =
- map_generic_timer_interrupt(gtdt->non_secure_el1_interrupt,
- gtdt->non_secure_el1_flags);
+ arch_timer_ppi[ARCH_TIMER_HYP_PPI] =
+ acpi_gtdt_map_ppi(ARCH_TIMER_HYP_PPI);

- arch_timer_ppi[VIRT_PPI] =
- map_generic_timer_interrupt(gtdt->virtual_timer_interrupt,
- gtdt->virtual_timer_flags);
+ arch_timer_kvm_info.virtual_irq = arch_timer_ppi[ARCH_TIMER_VIRT_PPI];

- arch_timer_ppi[HYP_PPI] =
- map_generic_timer_interrupt(gtdt->non_secure_el2_interrupt,
- gtdt->non_secure_el2_flags);
+ /*
+ * When probing via ACPI, we have no mechanism to override the sysreg
+ * CNTFRQ value. This *must* be correct.
+ */
+ arch_timer_rate = arch_timer_get_cntfrq();
+ if (!arch_timer_rate) {
+ pr_err(FW_BUG "frequency not available.\n");
+ return -EINVAL;
+ }

- /* Get the frequency from CNTFRQ */
- arch_timer_detect_rate(NULL, NULL);
+ arch_timer_uses_ppi = arch_timer_select_ppi();
+ if (!arch_timer_ppi[arch_timer_uses_ppi]) {
+ pr_err("No interrupt available, giving up\n");
+ return -EINVAL;
+ }

/* Always-on capability */
- arch_timer_c3stop = !(gtdt->non_secure_el1_flags & ACPI_GTDT_ALWAYS_ON);
+ arch_timer_c3stop = acpi_gtdt_c3stop(arch_timer_uses_ppi);

- arch_timer_init();
- return 0;
+ /* Check for globally applicable workarounds */
+ arch_timer_check_ool_workaround(ate_match_acpi_oem_info, table);
+
+ ret = arch_timer_register();
+ if (ret)
+ return ret;
+
+ if (platform_timer_count &&
+ arch_timer_mem_acpi_init(platform_timer_count))
+ pr_err("Failed to initialize memory-mapped timer.\n");
+
+ return arch_timer_common_init();
}
CLOCKSOURCE_ACPI_DECLARE(arch_timer, ACPI_SIG_GTDT, arch_timer_acpi_init);
#endif
diff --git a/drivers/clocksource/asm9260_timer.c b/drivers/clocksource/asm9260_timer.c
index 1ba871b7fe11..c6780830b8ac 100644
--- a/drivers/clocksource/asm9260_timer.c
+++ b/drivers/clocksource/asm9260_timer.c
@@ -193,7 +193,7 @@ static int __init asm9260_timer_init(struct device_node *np)

priv.base = of_io_request_and_map(np, 0, np->name);
if (IS_ERR(priv.base)) {
- pr_err("%s: unable to map resource", np->name);
+ pr_err("%s: unable to map resource\n", np->name);
return PTR_ERR(priv.base);
}

diff --git a/drivers/clocksource/bcm2835_timer.c b/drivers/clocksource/bcm2835_timer.c
index f2f29d2be1cf..dce44307469e 100644
--- a/drivers/clocksource/bcm2835_timer.c
+++ b/drivers/clocksource/bcm2835_timer.c
@@ -89,13 +89,13 @@ static int __init bcm2835_timer_init(struct device_node *node)

base = of_iomap(node, 0);
if (!base) {
- pr_err("Can't remap registers");
+ pr_err("Can't remap registers\n");
return -ENXIO;
}

ret = of_property_read_u32(node, "clock-frequency", &freq);
if (ret) {
- pr_err("Can't read clock-frequency");
+ pr_err("Can't read clock-frequency\n");
goto err_iounmap;
}

@@ -107,7 +107,7 @@ static int __init bcm2835_timer_init(struct device_node *node)

irq = irq_of_parse_and_map(node, DEFAULT_TIMER);
if (irq <= 0) {
- pr_err("Can't parse IRQ");
+ pr_err("Can't parse IRQ\n");
ret = -EINVAL;
goto err_iounmap;
}
diff --git a/drivers/clocksource/bcm_kona_timer.c b/drivers/clocksource/bcm_kona_timer.c
index 92f6e4deee74..fda5e1476638 100644
--- a/drivers/clocksource/bcm_kona_timer.c
+++ b/drivers/clocksource/bcm_kona_timer.c
@@ -179,7 +179,7 @@ static int __init kona_timer_init(struct device_node *node)
} else if (!of_property_read_u32(node, "clock-frequency", &freq)) {
arch_timer_rate = freq;
} else {
- pr_err("Kona Timer v1 unable to determine clock-frequency");
+ pr_err("Kona Timer v1 unable to determine clock-frequency\n");
return -EINVAL;
}

diff --git a/drivers/clocksource/clksrc-probe.c b/drivers/clocksource/clksrc-probe.c
index bc62be97f0a8..ac701ffb8d59 100644
--- a/drivers/clocksource/clksrc-probe.c
+++ b/drivers/clocksource/clksrc-probe.c
@@ -40,7 +40,7 @@ void __init clocksource_probe(void)

ret = init_func_ret(np);
if (ret) {
- pr_err("Failed to initialize '%s': %d",
+ pr_err("Failed to initialize '%s': %d\n",
of_node_full_name(np), ret);
continue;
}
diff --git a/drivers/clocksource/dw_apb_timer.c b/drivers/clocksource/dw_apb_timer.c
index 63e4f5519577..1f5f734e4919 100644
--- a/drivers/clocksource/dw_apb_timer.c
+++ b/drivers/clocksource/dw_apb_timer.c
@@ -101,7 +101,7 @@ static irqreturn_t dw_apb_clockevent_irq(int irq, void *data)
struct dw_apb_clock_event_device *dw_ced = ced_to_dw_apb_ced(evt);

if (!evt->event_handler) {
- pr_info("Spurious APBT timer interrupt %d", irq);
+ pr_info("Spurious APBT timer interrupt %d\n", irq);
return IRQ_NONE;
}

@@ -257,7 +257,9 @@ dw_apb_clockevent_init(int cpu, const char *name, unsigned rating,
clockevents_calc_mult_shift(&dw_ced->ced, freq, APBT_MIN_PERIOD);
dw_ced->ced.max_delta_ns = clockevent_delta2ns(0x7fffffff,
&dw_ced->ced);
+ dw_ced->ced.max_delta_ticks = 0x7fffffff;
dw_ced->ced.min_delta_ns = clockevent_delta2ns(5000, &dw_ced->ced);
+ dw_ced->ced.min_delta_ticks = 5000;
dw_ced->ced.cpumask = cpumask_of(cpu);
dw_ced->ced.features = CLOCK_EVT_FEAT_PERIODIC |
CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_DYNIRQ;
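
Filling in max_delta_ticks/min_delta_ticks alongside the *_ns values is part of the groundwork for NTP correction of clock event devices: the core keeps the raw tick limits so the nanosecond bounds can be recomputed when the device mult is adjusted. Ignoring rounding and clamping, the ticks-to-ns scaling behind clockevent_delta2ns() is the usual mult/shift multiply; a standalone sketch with illustrative values (not the driver's real mult/shift):

#include <stdint.h>
#include <stdio.h>

static uint64_t ticks_to_ns(uint64_t ticks, uint32_t mult, uint32_t shift)
{
        return (ticks * mult) >> shift;
}

int main(void)
{
        /* e.g. a 100 MHz timer: 1 tick = 10 ns, mult/shift chosen accordingly */
        uint32_t mult = 10 << 8, shift = 8;

        printf("0x7fffffff ticks ~ %llu ns\n",
               (unsigned long long)ticks_to_ns(0x7fffffff, mult, shift));
        printf("5000 ticks ~ %llu ns\n",
               (unsigned long long)ticks_to_ns(5000, mult, shift));
        return 0;
}
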
diff --git a/drivers/clocksource/em_sti.c b/drivers/clocksource/em_sti.c
index aff87df07449..bc48cbf6a795 100644
--- a/drivers/clocksource/em_sti.c
+++ b/drivers/clocksource/em_sti.c
@@ -78,15 +78,12 @@ static int em_sti_enable(struct em_sti_priv *p)
int ret;

/* enable clock */
- ret = clk_prepare_enable(p->clk);
+ ret = clk_enable(p->clk);
if (ret) {
dev_err(&p->pdev->dev, "cannot enable clock\n");
return ret;
}

- /* configure channel, periodic mode and maximum timeout */
- p->rate = clk_get_rate(p->clk);
-
/* reset the counter */
em_sti_write(p, STI_SET_H, 0x40000000);
em_sti_write(p, STI_SET_L, 0x00000000);
@@ -107,7 +104,7 @@ static void em_sti_disable(struct em_sti_priv *p)
em_sti_write(p, STI_INTENCLR, 3);

/* stop clock */
- clk_disable_unprepare(p->clk);
+ clk_disable(p->clk);
}

static u64 em_sti_count(struct em_sti_priv *p)
@@ -205,13 +202,9 @@ static u64 em_sti_clocksource_read(struct clocksource *cs)

static int em_sti_clocksource_enable(struct clocksource *cs)
{
- int ret;
struct em_sti_priv *p = cs_to_em_sti(cs);

- ret = em_sti_start(p, USER_CLOCKSOURCE);
- if (!ret)
- __clocksource_update_freq_hz(cs, p->rate);
- return ret;
+ return em_sti_start(p, USER_CLOCKSOURCE);
}

static void em_sti_clocksource_disable(struct clocksource *cs)
@@ -240,8 +233,7 @@ static int em_sti_register_clocksource(struct em_sti_priv *p)

dev_info(&p->pdev->dev, "used as clock source\n");

- /* Register with dummy 1 Hz value, gets updated in ->enable() */
- clocksource_register_hz(cs, 1);
+ clocksource_register_hz(cs, p->rate);
return 0;
}

@@ -263,7 +255,6 @@ static int em_sti_clock_event_set_oneshot(struct clock_event_device *ced)

dev_info(&p->pdev->dev, "used for oneshot clock events\n");
em_sti_start(p, USER_CLOCKEVENT);
- clockevents_config(&p->ced, p->rate);
return 0;
}

@@ -294,8 +285,7 @@ static void em_sti_register_clockevent(struct em_sti_priv *p)

dev_info(&p->pdev->dev, "used for clock events\n");

- /* Register with dummy 1 Hz value, gets updated in ->set_state_oneshot() */
- clockevents_config_and_register(ced, 1, 2, 0xffffffff);
+ clockevents_config_and_register(ced, p->rate, 2, 0xffffffff);
}

static int em_sti_probe(struct platform_device *pdev)
@@ -303,6 +293,7 @@ static int em_sti_probe(struct platform_device *pdev)
struct em_sti_priv *p;
struct resource *res;
int irq;
+ int ret;

p = devm_kzalloc(&pdev->dev, sizeof(*p), GFP_KERNEL);
if (p == NULL)
@@ -323,6 +314,13 @@ static int em_sti_probe(struct platform_device *pdev)
if (IS_ERR(p->base))
return PTR_ERR(p->base);

+ if (devm_request_irq(&pdev->dev, irq, em_sti_interrupt,
+ IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING,
+ dev_name(&pdev->dev), p)) {
+ dev_err(&pdev->dev, "failed to request low IRQ\n");
+ return -ENOENT;
+ }
+
/* get hold of clock */
p->clk = devm_clk_get(&pdev->dev, "sclk");
if (IS_ERR(p->clk)) {
@@ -330,12 +328,20 @@ static int em_sti_probe(struct platform_device *pdev)
return PTR_ERR(p->clk);
}

- if (devm_request_irq(&pdev->dev, irq, em_sti_interrupt,
- IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING,
- dev_name(&pdev->dev), p)) {
- dev_err(&pdev->dev, "failed to request low IRQ\n");
- return -ENOENT;
+ ret = clk_prepare(p->clk);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "cannot prepare clock\n");
+ return ret;
+ }
+
+ ret = clk_enable(p->clk);
+ if (ret < 0) {
+ dev_err(&p->pdev->dev, "cannot enable clock\n");
+ clk_unprepare(p->clk);
+ return ret;
}
+ p->rate = clk_get_rate(p->clk);
+ clk_disable(p->clk);

raw_spin_lock_init(&p->lock);
em_sti_register_clockevent(p);
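
The em_sti change moves the possibly-sleeping clk_prepare() to probe time and caches the rate there, so the enable/disable calls left in the hot paths are cheap and safe in atomic context. A condensed sketch of that probe-time pattern (struct and helper names are made up):

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

struct sketch_priv { struct clk *clk; unsigned long rate; };

static int sketch_probe_clk(struct device *dev, struct sketch_priv *p)
{
        int ret;

        p->clk = devm_clk_get(dev, "sclk");
        if (IS_ERR(p->clk))
                return PTR_ERR(p->clk);

        ret = clk_prepare(p->clk);              /* may sleep: do it once here */
        if (ret)
                return ret;

        ret = clk_enable(p->clk);               /* briefly enable to read the rate */
        if (ret) {
                clk_unprepare(p->clk);
                return ret;
        }
        p->rate = clk_get_rate(p->clk);
        clk_disable(p->clk);                    /* runtime paths re-enable cheaply */

        return 0;
}
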
diff --git a/drivers/clocksource/h8300_timer8.c b/drivers/clocksource/h8300_timer8.c
index 546bb180f5a4..804c489531d6 100644
--- a/drivers/clocksource/h8300_timer8.c
+++ b/drivers/clocksource/h8300_timer8.c
@@ -101,15 +101,7 @@ static inline struct timer8_priv *ced_to_priv(struct clock_event_device *ced)

static void timer8_clock_event_start(struct timer8_priv *p, unsigned long delta)
{
- struct clock_event_device *ced = &p->ced;
-
timer8_start(p);
-
- ced->shift = 32;
- ced->mult = div_sc(p->rate, NSEC_PER_SEC, ced->shift);
- ced->max_delta_ns = clockevent_delta2ns(0xffff, ced);
- ced->min_delta_ns = clockevent_delta2ns(0x0001, ced);
-
timer8_set_next(p, delta);
}

diff --git a/drivers/clocksource/meson6_timer.c b/drivers/clocksource/meson6_timer.c
index 52af591a9fc7..39d21f693a33 100644
--- a/drivers/clocksource/meson6_timer.c
+++ b/drivers/clocksource/meson6_timer.c
@@ -133,13 +133,13 @@ static int __init meson6_timer_init(struct device_node *node)

timer_base = of_io_request_and_map(node, 0, "meson6-timer");
if (IS_ERR(timer_base)) {
- pr_err("Can't map registers");
+ pr_err("Can't map registers\n");
return -ENXIO;
}

irq = irq_of_parse_and_map(node, 0);
if (irq <= 0) {
- pr_err("Can't parse IRQ");
+ pr_err("Can't parse IRQ\n");
return -EINVAL;
}

diff --git a/drivers/clocksource/metag_generic.c b/drivers/clocksource/metag_generic.c
index 6fcf96540631..3e5fa2f62d5f 100644
--- a/drivers/clocksource/metag_generic.c
+++ b/drivers/clocksource/metag_generic.c
@@ -114,7 +114,9 @@ static int arch_timer_starting_cpu(unsigned int cpu)

clk->mult = div_sc(hwtimer_freq, NSEC_PER_SEC, clk->shift);
clk->max_delta_ns = clockevent_delta2ns(0x7fffffff, clk);
+ clk->max_delta_ticks = 0x7fffffff;
clk->min_delta_ns = clockevent_delta2ns(0xf, clk);
+ clk->min_delta_ticks = 0xf;
clk->cpumask = cpumask_of(cpu);

clockevents_register_device(clk);
diff --git a/drivers/clocksource/mips-gic-timer.c b/drivers/clocksource/mips-gic-timer.c
index d9ef7a61e093..3f52ee219923 100644
--- a/drivers/clocksource/mips-gic-timer.c
+++ b/drivers/clocksource/mips-gic-timer.c
@@ -154,19 +154,6 @@ static int __init __gic_clocksource_init(void)
return ret;
}

-void __init gic_clocksource_init(unsigned int frequency)
-{
- gic_frequency = frequency;
- gic_timer_irq = MIPS_GIC_IRQ_BASE +
- GIC_LOCAL_TO_HWIRQ(GIC_LOCAL_INT_COMPARE);
-
- __gic_clocksource_init();
- gic_clockevent_init();
-
- /* And finally start the counter */
- gic_start_count();
-}
-
static int __init gic_clocksource_of_init(struct device_node *node)
{
struct clk *clk;
@@ -174,7 +161,7 @@ static int __init gic_clocksource_of_init(struct device_node *node)

if (!gic_present || !node->parent ||
!of_device_is_compatible(node->parent, "mti,gic")) {
- pr_warn("No DT definition for the mips gic driver");
+ pr_warn("No DT definition for the mips gic driver\n");
return -ENXIO;
}

diff --git a/drivers/clocksource/nomadik-mtu.c b/drivers/clocksource/nomadik-mtu.c
index 3c124d1ca600..7d44de304f37 100644
--- a/drivers/clocksource/nomadik-mtu.c
+++ b/drivers/clocksource/nomadik-mtu.c
@@ -260,25 +260,25 @@ static int __init nmdk_timer_of_init(struct device_node *node)

base = of_iomap(node, 0);
if (!base) {
- pr_err("Can't remap registers");
+ pr_err("Can't remap registers\n");
return -ENXIO;
}

pclk = of_clk_get_by_name(node, "apb_pclk");
if (IS_ERR(pclk)) {
- pr_err("could not get apb_pclk");
+ pr_err("could not get apb_pclk\n");
return PTR_ERR(pclk);
}

clk = of_clk_get_by_name(node, "timclk");
if (IS_ERR(clk)) {
- pr_err("could not get timclk");
+ pr_err("could not get timclk\n");
return PTR_ERR(clk);
}

irq = irq_of_parse_and_map(node, 0);
if (irq <= 0) {
- pr_err("Can't parse IRQ");
+ pr_err("Can't parse IRQ\n");
return -EINVAL;
}

diff --git a/drivers/clocksource/numachip.c b/drivers/clocksource/numachip.c
index 4e0f11fd2617..6a20dc8b253f 100644
--- a/drivers/clocksource/numachip.c
+++ b/drivers/clocksource/numachip.c
@@ -51,7 +51,9 @@ static struct clock_event_device numachip2_clockevent = {
.mult = 1,
.shift = 0,
.min_delta_ns = 1250,
+ .min_delta_ticks = 1250,
.max_delta_ns = LONG_MAX,
+ .max_delta_ticks = LONG_MAX,
};

static void numachip_timer_interrupt(void)
diff --git a/drivers/clocksource/pxa_timer.c b/drivers/clocksource/pxa_timer.c
index 1c24de215c14..a10fa667325f 100644
--- a/drivers/clocksource/pxa_timer.c
+++ b/drivers/clocksource/pxa_timer.c
@@ -166,14 +166,14 @@ static int __init pxa_timer_common_init(int irq, unsigned long clock_tick_rate)

ret = setup_irq(irq, &pxa_ost0_irq);
if (ret) {
- pr_err("Failed to setup irq");
+ pr_err("Failed to setup irq\n");
return ret;
}

ret = clocksource_mmio_init(timer_base + OSCR, "oscr0", clock_tick_rate, 200,
32, clocksource_mmio_readl_up);
if (ret) {
- pr_err("Failed to init clocksource");
+ pr_err("Failed to init clocksource\n");
return ret;
}

@@ -203,7 +203,7 @@ static int __init pxa_timer_dt_init(struct device_node *np)

ret = clk_prepare_enable(clk);
if (ret) {
- pr_crit("Failed to prepare clock");
+ pr_crit("Failed to prepare clock\n");
return ret;
}

diff --git a/drivers/clocksource/rockchip_timer.c b/drivers/clocksource/rockchip_timer.c
index 23e267acba25..49c02be50eca 100644
--- a/drivers/clocksource/rockchip_timer.c
+++ b/drivers/clocksource/rockchip_timer.c
@@ -11,6 +11,8 @@
#include <linux/clockchips.h>
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/sched_clock.h>
+#include <linux/slab.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
@@ -19,6 +21,8 @@

#define TIMER_LOAD_COUNT0 0x00
#define TIMER_LOAD_COUNT1 0x04
+#define TIMER_CURRENT_VALUE0 0x08
+#define TIMER_CURRENT_VALUE1 0x0C
#define TIMER_CONTROL_REG3288 0x10
#define TIMER_CONTROL_REG3399 0x1c
#define TIMER_INT_STATUS 0x18
@@ -29,103 +33,118 @@
#define TIMER_MODE_USER_DEFINED_COUNT (1 << 1)
#define TIMER_INT_UNMASK (1 << 2)

-struct bc_timer {
- struct clock_event_device ce;
+struct rk_timer {
void __iomem *base;
void __iomem *ctrl;
+ struct clk *clk;
+ struct clk *pclk;
u32 freq;
+ int irq;
};

-static struct bc_timer bc_timer;
-
-static inline struct bc_timer *rk_timer(struct clock_event_device *ce)
-{
- return container_of(ce, struct bc_timer, ce);
-}
+struct rk_clkevt {
+ struct clock_event_device ce;
+ struct rk_timer timer;
+};

-static inline void __iomem *rk_base(struct clock_event_device *ce)
-{
- return rk_timer(ce)->base;
-}
+static struct rk_clkevt *rk_clkevt;
+static struct rk_timer *rk_clksrc;

-static inline void __iomem *rk_ctrl(struct clock_event_device *ce)
+static inline struct rk_timer *rk_timer(struct clock_event_device *ce)
{
- return rk_timer(ce)->ctrl;
+ return &container_of(ce, struct rk_clkevt, ce)->timer;
}

-static inline void rk_timer_disable(struct clock_event_device *ce)
+static inline void rk_timer_disable(struct rk_timer *timer)
{
- writel_relaxed(TIMER_DISABLE, rk_ctrl(ce));
+ writel_relaxed(TIMER_DISABLE, timer->ctrl);
}

-static inline void rk_timer_enable(struct clock_event_device *ce, u32 flags)
+static inline void rk_timer_enable(struct rk_timer *timer, u32 flags)
{
- writel_relaxed(TIMER_ENABLE | TIMER_INT_UNMASK | flags,
- rk_ctrl(ce));
+ writel_relaxed(TIMER_ENABLE | flags, timer->ctrl);
}

static void rk_timer_update_counter(unsigned long cycles,
- struct clock_event_device *ce)
+ struct rk_timer *timer)
{
- writel_relaxed(cycles, rk_base(ce) + TIMER_LOAD_COUNT0);
- writel_relaxed(0, rk_base(ce) + TIMER_LOAD_COUNT1);
+ writel_relaxed(cycles, timer->base + TIMER_LOAD_COUNT0);
+ writel_relaxed(0, timer->base + TIMER_LOAD_COUNT1);
}

-static void rk_timer_interrupt_clear(struct clock_event_device *ce)
+static void rk_timer_interrupt_clear(struct rk_timer *timer)
{
- writel_relaxed(1, rk_base(ce) + TIMER_INT_STATUS);
+ writel_relaxed(1, timer->base + TIMER_INT_STATUS);
}

static inline int rk_timer_set_next_event(unsigned long cycles,
struct clock_event_device *ce)
{
- rk_timer_disable(ce);
- rk_timer_update_counter(cycles, ce);
- rk_timer_enable(ce, TIMER_MODE_USER_DEFINED_COUNT);
+ struct rk_timer *timer = rk_timer(ce);
+
+ rk_timer_disable(timer);
+ rk_timer_update_counter(cycles, timer);
+ rk_timer_enable(timer, TIMER_MODE_USER_DEFINED_COUNT |
+ TIMER_INT_UNMASK);
return 0;
}

static int rk_timer_shutdown(struct clock_event_device *ce)
{
- rk_timer_disable(ce);
+ struct rk_timer *timer = rk_timer(ce);
+
+ rk_timer_disable(timer);
return 0;
}

static int rk_timer_set_periodic(struct clock_event_device *ce)
{
- rk_timer_disable(ce);
- rk_timer_update_counter(rk_timer(ce)->freq / HZ - 1, ce);
- rk_timer_enable(ce, TIMER_MODE_FREE_RUNNING);
+ struct rk_timer *timer = rk_timer(ce);
+
+ rk_timer_disable(timer);
+ rk_timer_update_counter(timer->freq / HZ - 1, timer);
+ rk_timer_enable(timer, TIMER_MODE_FREE_RUNNING | TIMER_INT_UNMASK);
return 0;
}

static irqreturn_t rk_timer_interrupt(int irq, void *dev_id)
{
struct clock_event_device *ce = dev_id;
+ struct rk_timer *timer = rk_timer(ce);

- rk_timer_interrupt_clear(ce);
+ rk_timer_interrupt_clear(timer);

if (clockevent_state_oneshot(ce))
- rk_timer_disable(ce);
+ rk_timer_disable(timer);

ce->event_handler(ce);

return IRQ_HANDLED;
}

-static int __init rk_timer_init(struct device_node *np, u32 ctrl_reg)
+static u64 notrace rk_timer_sched_read(void)
+{
+ return ~readl_relaxed(rk_clksrc->base + TIMER_CURRENT_VALUE0);
+}
+
+static int __init
+rk_timer_probe(struct rk_timer *timer, struct device_node *np)
{
- struct clock_event_device *ce = &bc_timer.ce;
struct clk *timer_clk;
struct clk *pclk;
int ret = -EINVAL, irq;
+ u32 ctrl_reg = TIMER_CONTROL_REG3288;

- bc_timer.base = of_iomap(np, 0);
- if (!bc_timer.base) {
+ timer->base = of_iomap(np, 0);
+ if (!timer->base) {
pr_err("Failed to get base address for '%s'\n", TIMER_NAME);
return -ENXIO;
}
- bc_timer.ctrl = bc_timer.base + ctrl_reg;
+
+ if (of_device_is_compatible(np, "rockchip,rk3399-timer"))
+ ctrl_reg = TIMER_CONTROL_REG3399;
+
+ timer->ctrl = timer->base + ctrl_reg;

pclk = of_clk_get_by_name(np, "pclk");
if (IS_ERR(pclk)) {
@@ -139,6 +158,7 @@ static int __init rk_timer_init(struct device_node *np, u32 ctrl_reg)
pr_err("Failed to enable pclk for '%s'\n", TIMER_NAME);
goto out_unmap;
}
+ timer->pclk = pclk;

timer_clk = of_clk_get_by_name(np, "timer");
if (IS_ERR(timer_clk)) {
@@ -152,8 +172,9 @@ static int __init rk_timer_init(struct device_node *np, u32 ctrl_reg)
pr_err("Failed to enable timer clock\n");
goto out_timer_clk;
}
+ timer->clk = timer_clk;

- bc_timer.freq = clk_get_rate(timer_clk);
+ timer->freq = clk_get_rate(timer_clk);

irq = irq_of_parse_and_map(np, 0);
if (!irq) {
@@ -161,51 +182,126 @@ static int __init rk_timer_init(struct device_node *np, u32 ctrl_reg)
pr_err("Failed to map interrupts for '%s'\n", TIMER_NAME);
goto out_irq;
}
+ timer->irq = irq;
+
+ rk_timer_interrupt_clear(timer);
+ rk_timer_disable(timer);
+ return 0;
+
+out_irq:
+ clk_disable_unprepare(timer_clk);
+out_timer_clk:
+ clk_disable_unprepare(pclk);
+out_unmap:
+ iounmap(timer->base);
+
+ return ret;
+}
+
+static void __init rk_timer_cleanup(struct rk_timer *timer)
+{
+ clk_disable_unprepare(timer->clk);
+ clk_disable_unprepare(timer->pclk);
+ iounmap(timer->base);
+}
+
+static int __init rk_clkevt_init(struct device_node *np)
+{
+ struct clock_event_device *ce;
+ int ret = -EINVAL;
+
+ rk_clkevt = kzalloc(sizeof(struct rk_clkevt), GFP_KERNEL);
+ if (!rk_clkevt) {
+ ret = -ENOMEM;
+ goto out;
+ }

+ ret = rk_timer_probe(&rk_clkevt->timer, np);
+ if (ret)
+ goto out_probe;
+
+ ce = &rk_clkevt->ce;
ce->name = TIMER_NAME;
ce->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT |
CLOCK_EVT_FEAT_DYNIRQ;
ce->set_next_event = rk_timer_set_next_event;
ce->set_state_shutdown = rk_timer_shutdown;
ce->set_state_periodic = rk_timer_set_periodic;
- ce->irq = irq;
+ ce->irq = rk_clkevt->timer.irq;
ce->cpumask = cpu_possible_mask;
ce->rating = 250;

- rk_timer_interrupt_clear(ce);
- rk_timer_disable(ce);
-
- ret = request_irq(irq, rk_timer_interrupt, IRQF_TIMER, TIMER_NAME, ce);
+ ret = request_irq(rk_clkevt->timer.irq, rk_timer_interrupt, IRQF_TIMER,
+ TIMER_NAME, ce);
if (ret) {
- pr_err("Failed to initialize '%s': %d\n", TIMER_NAME, ret);
+ pr_err("Failed to initialize '%s': %d\n",
+ TIMER_NAME, ret);
goto out_irq;
}

- clockevents_config_and_register(ce, bc_timer.freq, 1, UINT_MAX);
-
+ clockevents_config_and_register(&rk_clkevt->ce,
+ rk_clkevt->timer.freq, 1, UINT_MAX);
return 0;

out_irq:
- clk_disable_unprepare(timer_clk);
-out_timer_clk:
- clk_disable_unprepare(pclk);
-out_unmap:
- iounmap(bc_timer.base);
-
+ rk_timer_cleanup(&rk_clkevt->timer);
+out_probe:
+ kfree(rk_clkevt);
+out:
+ /* Leave rk_clkevt not NULL to prevent future init */
+ rk_clkevt = ERR_PTR(ret);
return ret;
}

-static int __init rk3288_timer_init(struct device_node *np)
+static int __init rk_clksrc_init(struct device_node *np)
{
- return rk_timer_init(np, TIMER_CONTROL_REG3288);
+ int ret = -EINVAL;
+
+ rk_clksrc = kzalloc(sizeof(struct rk_timer), GFP_KERNEL);
+ if (!rk_clksrc) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ ret = rk_timer_probe(rk_clksrc, np);
+ if (ret)
+ goto out_probe;
+
+ rk_timer_update_counter(UINT_MAX, rk_clksrc);
+ rk_timer_enable(rk_clksrc, 0);
+
+ ret = clocksource_mmio_init(rk_clksrc->base + TIMER_CURRENT_VALUE0,
+ TIMER_NAME, rk_clksrc->freq, 250, 32,
+ clocksource_mmio_readl_down);
+ if (ret) {
+ pr_err("Failed to register clocksource");
+ goto out_clocksource;
+ }
+
+ sched_clock_register(rk_timer_sched_read, 32, rk_clksrc->freq);
+ return 0;
+
+out_clocksource:
+ rk_timer_cleanup(rk_clksrc);
+out_probe:
+ kfree(rk_clksrc);
+out:
+ /* Leave rk_clksrc not NULL to prevent future init */
+ rk_clksrc = ERR_PTR(ret);
+ return ret;
}

-static int __init rk3399_timer_init(struct device_node *np)
+static int __init rk_timer_init(struct device_node *np)
{
- return rk_timer_init(np, TIMER_CONTROL_REG3399);
+ if (!rk_clkevt)
+ return rk_clkevt_init(np);
+
+ if (!rk_clksrc)
+ return rk_clksrc_init(np);
+
+ pr_err("Too many timer definitions for '%s'\n", TIMER_NAME);
+ return -EINVAL;
}

-CLOCKSOURCE_OF_DECLARE(rk3288_timer, "rockchip,rk3288-timer",
- rk3288_timer_init);
-CLOCKSOURCE_OF_DECLARE(rk3399_timer, "rockchip,rk3399-timer",
- rk3399_timer_init);
+CLOCKSOURCE_OF_DECLARE(rk3288_timer, "rockchip,rk3288-timer", rk_timer_init);
+CLOCKSOURCE_OF_DECLARE(rk3399_timer, "rockchip,rk3399-timer", rk_timer_init);
diff --git a/drivers/clocksource/samsung_pwm_timer.c b/drivers/clocksource/samsung_pwm_timer.c
index 0093ece661fe..a68e6538c809 100644
--- a/drivers/clocksource/samsung_pwm_timer.c
+++ b/drivers/clocksource/samsung_pwm_timer.c
@@ -385,7 +385,7 @@ static int __init _samsung_pwm_clocksource_init(void)
mask = ~pwm.variant.output_mask & ((1 << SAMSUNG_PWM_NUM) - 1);
channel = fls(mask) - 1;
if (channel < 0) {
- pr_crit("failed to find PWM channel for clocksource");
+ pr_crit("failed to find PWM channel for clocksource\n");
return -EINVAL;
}
pwm.source_id = channel;
@@ -393,7 +393,7 @@ static int __init _samsung_pwm_clocksource_init(void)
mask &= ~(1 << channel);
channel = fls(mask) - 1;
if (channel < 0) {
- pr_crit("failed to find PWM channel for clock event");
+ pr_crit("failed to find PWM channel for clock event\n");
return -EINVAL;
}
pwm.event_id = channel;
@@ -448,7 +448,7 @@ static int __init samsung_pwm_alloc(struct device_node *np,

pwm.timerclk = of_clk_get_by_name(np, "timers");
if (IS_ERR(pwm.timerclk)) {
- pr_crit("failed to get timers clock for timer");
+ pr_crit("failed to get timers clock for timer\n");
return PTR_ERR(pwm.timerclk);
}

diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c
index 28757edf6aca..e09e8bf0bb9b 100644
--- a/drivers/clocksource/sh_cmt.c
+++ b/drivers/clocksource/sh_cmt.c
@@ -103,7 +103,6 @@ struct sh_cmt_channel {
unsigned long match_value;
unsigned long next_match_value;
unsigned long max_match_value;
- unsigned long rate;
raw_spinlock_t lock;
struct clock_event_device ced;
struct clocksource cs;
@@ -118,6 +117,7 @@ struct sh_cmt_device {

void __iomem *mapbase;
struct clk *clk;
+ unsigned long rate;

raw_spinlock_t lock; /* Protect the shared start/stop register */

@@ -320,7 +320,7 @@ static void sh_cmt_start_stop_ch(struct sh_cmt_channel *ch, int start)
raw_spin_unlock_irqrestore(&ch->cmt->lock, flags);
}

-static int sh_cmt_enable(struct sh_cmt_channel *ch, unsigned long *rate)
+static int sh_cmt_enable(struct sh_cmt_channel *ch)
{
int k, ret;

@@ -340,11 +340,9 @@ static int sh_cmt_enable(struct sh_cmt_channel *ch, unsigned long *rate)

/* configure channel, periodic mode and maximum timeout */
if (ch->cmt->info->width == 16) {
- *rate = clk_get_rate(ch->cmt->clk) / 512;
sh_cmt_write_cmcsr(ch, SH_CMT16_CMCSR_CMIE |
SH_CMT16_CMCSR_CKS512);
} else {
- *rate = clk_get_rate(ch->cmt->clk) / 8;
sh_cmt_write_cmcsr(ch, SH_CMT32_CMCSR_CMM |
SH_CMT32_CMCSR_CMTOUT_IE |
SH_CMT32_CMCSR_CMR_IRQ |
@@ -572,7 +570,7 @@ static int sh_cmt_start(struct sh_cmt_channel *ch, unsigned long flag)
raw_spin_lock_irqsave(&ch->lock, flags);

if (!(ch->flags & (FLAG_CLOCKEVENT | FLAG_CLOCKSOURCE)))
- ret = sh_cmt_enable(ch, &ch->rate);
+ ret = sh_cmt_enable(ch);

if (ret)
goto out;
@@ -640,10 +638,9 @@ static int sh_cmt_clocksource_enable(struct clocksource *cs)
ch->total_cycles = 0;

ret = sh_cmt_start(ch, FLAG_CLOCKSOURCE);
- if (!ret) {
- __clocksource_update_freq_hz(cs, ch->rate);
+ if (!ret)
ch->cs_enabled = true;
- }
+
return ret;
}

@@ -697,8 +694,7 @@ static int sh_cmt_register_clocksource(struct sh_cmt_channel *ch,
dev_info(&ch->cmt->pdev->dev, "ch%u: used as clock source\n",
ch->index);

- /* Register with dummy 1 Hz value, gets updated in ->enable() */
- clocksource_register_hz(cs, 1);
+ clocksource_register_hz(cs, ch->cmt->rate);
return 0;
}

@@ -709,19 +705,10 @@ static struct sh_cmt_channel *ced_to_sh_cmt(struct clock_event_device *ced)

static void sh_cmt_clock_event_start(struct sh_cmt_channel *ch, int periodic)
{
- struct clock_event_device *ced = &ch->ced;
-
sh_cmt_start(ch, FLAG_CLOCKEVENT);

- /* TODO: calculate good shift from rate and counter bit width */
-
- ced->shift = 32;
- ced->mult = div_sc(ch->rate, NSEC_PER_SEC, ced->shift);
- ced->max_delta_ns = clockevent_delta2ns(ch->max_match_value, ced);
- ced->min_delta_ns = clockevent_delta2ns(0x1f, ced);
-
if (periodic)
- sh_cmt_set_next(ch, ((ch->rate + HZ/2) / HZ) - 1);
+ sh_cmt_set_next(ch, ((ch->cmt->rate + HZ/2) / HZ) - 1);
else
sh_cmt_set_next(ch, ch->max_match_value);
}
@@ -824,6 +811,14 @@ static int sh_cmt_register_clockevent(struct sh_cmt_channel *ch,
ced->suspend = sh_cmt_clock_event_suspend;
ced->resume = sh_cmt_clock_event_resume;

+ /* TODO: calculate good shift from rate and counter bit width */
+ ced->shift = 32;
+ ced->mult = div_sc(ch->cmt->rate, NSEC_PER_SEC, ced->shift);
+ ced->max_delta_ns = clockevent_delta2ns(ch->max_match_value, ced);
+ ced->max_delta_ticks = ch->max_match_value;
+ ced->min_delta_ns = clockevent_delta2ns(0x1f, ced);
+ ced->min_delta_ticks = 0x1f;
+
dev_info(&ch->cmt->pdev->dev, "ch%u: used for clock events\n",
ch->index);
clockevents_register_device(ced);
@@ -996,6 +991,18 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
if (ret < 0)
goto err_clk_put;

+ /* Determine clock rate. */
+ ret = clk_enable(cmt->clk);
+ if (ret < 0)
+ goto err_clk_unprepare;
+
+ if (cmt->info->width == 16)
+ cmt->rate = clk_get_rate(cmt->clk) / 512;
+ else
+ cmt->rate = clk_get_rate(cmt->clk) / 8;
+
+ clk_disable(cmt->clk);
+
/* Map the memory resource(s). */
ret = sh_cmt_map_memory(cmt);
if (ret < 0)
diff --git a/drivers/clocksource/sh_tmu.c b/drivers/clocksource/sh_tmu.c
index 1fbf2aadcfd4..31d881621e41 100644
--- a/drivers/clocksource/sh_tmu.c
+++ b/drivers/clocksource/sh_tmu.c
@@ -46,7 +46,6 @@ struct sh_tmu_channel {
void __iomem *base;
int irq;

- unsigned long rate;
unsigned long periodic;
struct clock_event_device ced;
struct clocksource cs;
@@ -59,6 +58,7 @@ struct sh_tmu_device {

void __iomem *mapbase;
struct clk *clk;
+ unsigned long rate;

enum sh_tmu_model model;

@@ -165,7 +165,6 @@ static int __sh_tmu_enable(struct sh_tmu_channel *ch)
sh_tmu_write(ch, TCNT, 0xffffffff);

/* configure channel to parent clock / 4, irq off */
- ch->rate = clk_get_rate(ch->tmu->clk) / 4;
sh_tmu_write(ch, TCR, TCR_TPSC_CLK4);

/* enable channel */
@@ -271,10 +270,8 @@ static int sh_tmu_clocksource_enable(struct clocksource *cs)
return 0;

ret = sh_tmu_enable(ch);
- if (!ret) {
- __clocksource_update_freq_hz(cs, ch->rate);
+ if (!ret)
ch->cs_enabled = true;
- }

return ret;
}
@@ -334,8 +331,7 @@ static int sh_tmu_register_clocksource(struct sh_tmu_channel *ch,
dev_info(&ch->tmu->pdev->dev, "ch%u: used as clock source\n",
ch->index);

- /* Register with dummy 1 Hz value, gets updated in ->enable() */
- clocksource_register_hz(cs, 1);
+ clocksource_register_hz(cs, ch->tmu->rate);
return 0;
}

@@ -346,14 +342,10 @@ static struct sh_tmu_channel *ced_to_sh_tmu(struct clock_event_device *ced)

static void sh_tmu_clock_event_start(struct sh_tmu_channel *ch, int periodic)
{
- struct clock_event_device *ced = &ch->ced;
-
sh_tmu_enable(ch);

- clockevents_config(ced, ch->rate);
-
if (periodic) {
- ch->periodic = (ch->rate + HZ/2) / HZ;
+ ch->periodic = (ch->tmu->rate + HZ/2) / HZ;
sh_tmu_set_next(ch, ch->periodic, 1);
}
}
@@ -435,7 +427,7 @@ static void sh_tmu_register_clockevent(struct sh_tmu_channel *ch,
dev_info(&ch->tmu->pdev->dev, "ch%u: used for clock events\n",
ch->index);

- clockevents_config_and_register(ced, 1, 0x300, 0xffffffff);
+ clockevents_config_and_register(ced, ch->tmu->rate, 0x300, 0xffffffff);

ret = request_irq(ch->irq, sh_tmu_interrupt,
IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING,
@@ -561,6 +553,14 @@ static int sh_tmu_setup(struct sh_tmu_device *tmu, struct platform_device *pdev)
if (ret < 0)
goto err_clk_put;

+ /* Determine clock rate. */
+ ret = clk_enable(tmu->clk);
+ if (ret < 0)
+ goto err_clk_unprepare;
+
+ tmu->rate = clk_get_rate(tmu->clk) / 4;
+ clk_disable(tmu->clk);
+
/* Map the memory resource. */
ret = sh_tmu_map_memory(tmu);
if (ret < 0) {
diff --git a/drivers/clocksource/sun4i_timer.c b/drivers/clocksource/sun4i_timer.c
index c83452cacb41..4452d5c8f304 100644
--- a/drivers/clocksource/sun4i_timer.c
+++ b/drivers/clocksource/sun4i_timer.c
@@ -159,25 +159,25 @@ static int __init sun4i_timer_init(struct device_node *node)

timer_base = of_iomap(node, 0);
if (!timer_base) {
- pr_crit("Can't map registers");
+ pr_crit("Can't map registers\n");
return -ENXIO;
}

irq = irq_of_parse_and_map(node, 0);
if (irq <= 0) {
- pr_crit("Can't parse IRQ");
+ pr_crit("Can't parse IRQ\n");
return -EINVAL;
}

clk = of_clk_get(node, 0);
if (IS_ERR(clk)) {
- pr_crit("Can't get timer clock");
+ pr_crit("Can't get timer clock\n");
return PTR_ERR(clk);
}

ret = clk_prepare_enable(clk);
if (ret) {
- pr_err("Failed to prepare clock");
+ pr_err("Failed to prepare clock\n");
return ret;
}

@@ -200,7 +200,7 @@ static int __init sun4i_timer_init(struct device_node *node)
ret = clocksource_mmio_init(timer_base + TIMER_CNTVAL_REG(1), node->name,
rate, 350, 32, clocksource_mmio_readl_down);
if (ret) {
- pr_err("Failed to register clocksource");
+ pr_err("Failed to register clocksource\n");
return ret;
}

diff --git a/drivers/clocksource/tegra20_timer.c b/drivers/clocksource/tegra20_timer.c
index f960891aa04e..b9990b9c98c5 100644
--- a/drivers/clocksource/tegra20_timer.c
+++ b/drivers/clocksource/tegra20_timer.c
@@ -245,7 +245,7 @@ static int __init tegra20_init_rtc(struct device_node *np)

rtc_base = of_iomap(np, 0);
if (!rtc_base) {
- pr_err("Can't map RTC registers");
+ pr_err("Can't map RTC registers\n");
return -ENXIO;
}

diff --git a/drivers/clocksource/time-armada-370-xp.c b/drivers/clocksource/time-armada-370-xp.c
index 4440aefc59cd..aea4380129ea 100644
--- a/drivers/clocksource/time-armada-370-xp.c
+++ b/drivers/clocksource/time-armada-370-xp.c
@@ -247,13 +247,13 @@ static int __init armada_370_xp_timer_common_init(struct device_node *np)

timer_base = of_iomap(np, 0);
if (!timer_base) {
- pr_err("Failed to iomap");
+ pr_err("Failed to iomap\n");
return -ENXIO;
}

local_base = of_iomap(np, 1);
if (!local_base) {
- pr_err("Failed to iomap");
+ pr_err("Failed to iomap\n");
return -ENXIO;
}

@@ -298,7 +298,7 @@ static int __init armada_370_xp_timer_common_init(struct device_node *np)
"armada_370_xp_clocksource",
timer_clk, 300, 32, clocksource_mmio_readl_down);
if (res) {
- pr_err("Failed to initialize clocksource mmio");
+ pr_err("Failed to initialize clocksource mmio\n");
return res;
}

@@ -315,7 +315,7 @@ static int __init armada_370_xp_timer_common_init(struct device_node *np)
armada_370_xp_evt);
/* Immediately configure the timer on the boot CPU */
if (res) {
- pr_err("Failed to request percpu irq");
+ pr_err("Failed to request percpu irq\n");
return res;
}

@@ -324,7 +324,7 @@ static int __init armada_370_xp_timer_common_init(struct device_node *np)
armada_370_xp_timer_starting_cpu,
armada_370_xp_timer_dying_cpu);
if (res) {
- pr_err("Failed to setup hotplug state and timer");
+ pr_err("Failed to setup hotplug state and timer\n");
return res;
}

@@ -339,7 +339,7 @@ static int __init armada_xp_timer_init(struct device_node *np)
int ret;

if (IS_ERR(clk)) {
- pr_err("Failed to get clock");
+ pr_err("Failed to get clock\n");
return PTR_ERR(clk);
}

@@ -375,7 +375,7 @@ static int __init armada_375_timer_init(struct device_node *np)

/* Must have at least a clock */
if (IS_ERR(clk)) {
- pr_err("Failed to get clock");
+ pr_err("Failed to get clock\n");
return PTR_ERR(clk);
}

@@ -399,7 +399,7 @@ static int __init armada_370_timer_init(struct device_node *np)

clk = of_clk_get(np, 0);
if (IS_ERR(clk)) {
- pr_err("Failed to get clock");
+ pr_err("Failed to get clock\n");
return PTR_ERR(clk);
}

diff --git a/drivers/clocksource/time-efm32.c b/drivers/clocksource/time-efm32.c
index 5ac344b383e1..ce0f97b4e5db 100644
--- a/drivers/clocksource/time-efm32.c
+++ b/drivers/clocksource/time-efm32.c
@@ -235,7 +235,7 @@ static int __init efm32_clockevent_init(struct device_node *np)

ret = setup_irq(irq, &efm32_clock_event_irq);
if (ret) {
- pr_err("Failed setup irq");
+ pr_err("Failed setup irq\n");
goto err_setup_irq;
}

diff --git a/drivers/clocksource/time-orion.c b/drivers/clocksource/time-orion.c
index a28f496e97cf..b9b97f630c4d 100644
--- a/drivers/clocksource/time-orion.c
+++ b/drivers/clocksource/time-orion.c
@@ -15,6 +15,7 @@
#include <linux/bitops.h>
#include <linux/clk.h>
#include <linux/clockchips.h>
+#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
@@ -36,6 +37,21 @@

static void __iomem *timer_base;

+static unsigned long notrace orion_read_timer(void)
+{
+ return ~readl(timer_base + TIMER0_VAL);
+}
+
+static struct delay_timer orion_delay_timer = {
+ .read_current_timer = orion_read_timer,
+};
+
+static void orion_delay_timer_init(unsigned long rate)
+{
+ orion_delay_timer.freq = rate;
+ register_current_timer_delay(&orion_delay_timer);
+}
+
/*
* Free-running clocksource handling.
*/
@@ -106,6 +122,7 @@ static struct irqaction orion_clkevt_irq = {

static int __init orion_timer_init(struct device_node *np)
{
+ unsigned long rate;
struct clk *clk;
int irq, ret;

@@ -124,7 +141,7 @@ static int __init orion_timer_init(struct device_node *np)

ret = clk_prepare_enable(clk);
if (ret) {
- pr_err("Failed to prepare clock");
+ pr_err("Failed to prepare clock\n");
return ret;
}

@@ -135,6 +152,8 @@ static int __init orion_timer_init(struct device_node *np)
return -EINVAL;
}

+ rate = clk_get_rate(clk);
+
/* setup timer0 as free-running clocksource */
writel(~0, timer_base + TIMER0_VAL);
writel(~0, timer_base + TIMER0_RELOAD);
@@ -142,15 +161,15 @@ static int __init orion_timer_init(struct device_node *np)
TIMER0_RELOAD_EN | TIMER0_EN,
TIMER0_RELOAD_EN | TIMER0_EN);

- ret = clocksource_mmio_init(timer_base + TIMER0_VAL, "orion_clocksource",
- clk_get_rate(clk), 300, 32,
+ ret = clocksource_mmio_init(timer_base + TIMER0_VAL,
+ "orion_clocksource", rate, 300, 32,
clocksource_mmio_readl_down);
if (ret) {
- pr_err("Failed to initialize mmio timer");
+ pr_err("Failed to initialize mmio timer\n");
return ret;
}

- sched_clock_register(orion_read_sched_clock, 32, clk_get_rate(clk));
+ sched_clock_register(orion_read_sched_clock, 32, rate);

/* setup timer1 as clockevent timer */
ret = setup_irq(irq, &orion_clkevt_irq);
@@ -162,9 +181,12 @@ static int __init orion_timer_init(struct device_node *np)
ticks_per_jiffy = (clk_get_rate(clk) + HZ/2) / HZ;
orion_clkevt.cpumask = cpumask_of(0);
orion_clkevt.irq = irq;
- clockevents_config_and_register(&orion_clkevt, clk_get_rate(clk),
+ clockevents_config_and_register(&orion_clkevt, rate,
ORION_ONESHOT_MIN, ORION_ONESHOT_MAX);

+
+ orion_delay_timer_init(rate);
+
return 0;
}
CLOCKSOURCE_OF_DECLARE(orion_timer, "marvell,orion-timer", orion_timer_init);
diff --git a/drivers/clocksource/timer-atlas7.c b/drivers/clocksource/timer-atlas7.c
index 3d8a181f0252..50300eec4a39 100644
--- a/drivers/clocksource/timer-atlas7.c
+++ b/drivers/clocksource/timer-atlas7.c
@@ -192,7 +192,9 @@ static int sirfsoc_local_timer_starting_cpu(unsigned int cpu)
ce->set_next_event = sirfsoc_timer_set_next_event;
clockevents_calc_mult_shift(ce, atlas7_timer_rate, 60);
ce->max_delta_ns = clockevent_delta2ns(-2, ce);
+ ce->max_delta_ticks = (unsigned long)-2;
ce->min_delta_ns = clockevent_delta2ns(2, ce);
+ ce->min_delta_ticks = 2;
ce->cpumask = cpumask_of(cpu);

action->dev_id = ce;
diff --git a/drivers/clocksource/timer-atmel-pit.c b/drivers/clocksource/timer-atmel-pit.c
index c0b5df3167a0..cc112351dc70 100644
--- a/drivers/clocksource/timer-atmel-pit.c
+++ b/drivers/clocksource/timer-atmel-pit.c
@@ -226,7 +226,7 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)

ret = clocksource_register_hz(&data->clksrc, pit_rate);
if (ret) {
- pr_err("Failed to register clocksource");
+ pr_err("Failed to register clocksource\n");
return ret;
}

diff --git a/drivers/clocksource/timer-digicolor.c b/drivers/clocksource/timer-digicolor.c
index e9f50d289362..94a161eb9cce 100644
--- a/drivers/clocksource/timer-digicolor.c
+++ b/drivers/clocksource/timer-digicolor.c
@@ -161,19 +161,19 @@ static int __init digicolor_timer_init(struct device_node *node)
*/
dc_timer_dev.base = of_iomap(node, 0);
if (!dc_timer_dev.base) {
- pr_err("Can't map registers");
+ pr_err("Can't map registers\n");
return -ENXIO;
}

irq = irq_of_parse_and_map(node, dc_timer_dev.timer_id);
if (irq <= 0) {
- pr_err("Can't parse IRQ");
+ pr_err("Can't parse IRQ\n");
return -EINVAL;
}

clk = of_clk_get(node, 0);
if (IS_ERR(clk)) {
- pr_err("Can't get timer clock");
+ pr_err("Can't get timer clock\n");
return PTR_ERR(clk);
}
clk_prepare_enable(clk);
diff --git a/drivers/clocksource/timer-gemini.c b/drivers/clocksource/timer-fttmr010.c
similarity index 72%
rename from drivers/clocksource/timer-gemini.c
rename to drivers/clocksource/timer-fttmr010.c
index dda27b7bf1a1..b4a6f1e4bc54 100644
--- a/drivers/clocksource/timer-gemini.c
+++ b/drivers/clocksource/timer-fttmr010.c
@@ -1,5 +1,5 @@
/*
- * Gemini timer driver
+ * Faraday Technology FTTMR010 timer driver
* Copyright (C) 2017 Linus Walleij <linus.walleij@xxxxxxxxxx>
*
* Based on a rewrite of arch/arm/mach-gemini/timer.c:
@@ -16,17 +16,7 @@
#include <linux/clockchips.h>
#include <linux/clocksource.h>
#include <linux/sched_clock.h>
-
-/*
- * Relevant registers in the global syscon
- */
-#define GLOBAL_STATUS 0x04
-#define CPU_AHB_RATIO_MASK (0x3 << 18)
-#define CPU_AHB_1_1 (0x0 << 18)
-#define CPU_AHB_3_2 (0x1 << 18)
-#define CPU_AHB_24_13 (0x2 << 18)
-#define CPU_AHB_2_1 (0x3 << 18)
-#define REG_TO_AHB_SPEED(reg) ((((reg) >> 15) & 0x7) * 10 + 130)
+#include <linux/clk.h>

/*
* Register definitions for the timers
@@ -77,12 +67,12 @@
static unsigned int tick_rate;
static void __iomem *base;

-static u64 notrace gemini_read_sched_clock(void)
+static u64 notrace fttmr010_read_sched_clock(void)
{
return readl(base + TIMER3_COUNT);
}

-static int gemini_timer_set_next_event(unsigned long cycles,
+static int fttmr010_timer_set_next_event(unsigned long cycles,
struct clock_event_device *evt)
{
u32 cr;
@@ -96,7 +86,7 @@ static int gemini_timer_set_next_event(unsigned long cycles,
return 0;
}

-static int gemini_timer_shutdown(struct clock_event_device *evt)
+static int fttmr010_timer_shutdown(struct clock_event_device *evt)
{
u32 cr;

@@ -127,7 +117,7 @@ static int gemini_timer_shutdown(struct clock_event_device *evt)
return 0;
}

-static int gemini_timer_set_periodic(struct clock_event_device *evt)
+static int fttmr010_timer_set_periodic(struct clock_event_device *evt)
{
u32 period = DIV_ROUND_CLOSEST(tick_rate, HZ);
u32 cr;
@@ -158,54 +148,40 @@ static int gemini_timer_set_periodic(struct clock_event_device *evt)
}

/* Use TIMER1 as clock event */
-static struct clock_event_device gemini_clockevent = {
+static struct clock_event_device fttmr010_clockevent = {
.name = "TIMER1",
/* Reasonably fast and accurate clock event */
.rating = 300,
.shift = 32,
.features = CLOCK_EVT_FEAT_PERIODIC |
CLOCK_EVT_FEAT_ONESHOT,
- .set_next_event = gemini_timer_set_next_event,
- .set_state_shutdown = gemini_timer_shutdown,
- .set_state_periodic = gemini_timer_set_periodic,
- .set_state_oneshot = gemini_timer_shutdown,
- .tick_resume = gemini_timer_shutdown,
+ .set_next_event = fttmr010_timer_set_next_event,
+ .set_state_shutdown = fttmr010_timer_shutdown,
+ .set_state_periodic = fttmr010_timer_set_periodic,
+ .set_state_oneshot = fttmr010_timer_shutdown,
+ .tick_resume = fttmr010_timer_shutdown,
};

/*
* IRQ handler for the timer
*/
-static irqreturn_t gemini_timer_interrupt(int irq, void *dev_id)
+static irqreturn_t fttmr010_timer_interrupt(int irq, void *dev_id)
{
- struct clock_event_device *evt = &gemini_clockevent;
+ struct clock_event_device *evt = &fttmr010_clockevent;

evt->event_handler(evt);
return IRQ_HANDLED;
}

-static struct irqaction gemini_timer_irq = {
- .name = "Gemini Timer Tick",
+static struct irqaction fttmr010_timer_irq = {
+ .name = "Faraday FTTMR010 Timer Tick",
.flags = IRQF_TIMER,
- .handler = gemini_timer_interrupt,
+ .handler = fttmr010_timer_interrupt,
};

-static int __init gemini_timer_of_init(struct device_node *np)
+static int __init fttmr010_timer_common_init(struct device_node *np)
{
- static struct regmap *map;
int irq;
- int ret;
- u32 val;
-
- map = syscon_regmap_lookup_by_phandle(np, "syscon");
- if (IS_ERR(map)) {
- pr_err("Can't get regmap for syscon handle");
- return -ENODEV;
- }
- ret = regmap_read(map, GLOBAL_STATUS, &val);
- if (ret) {
- pr_err("Can't read syscon status register");
- return -ENXIO;
- }

base = of_iomap(np, 0);
if (!base) {
@@ -219,26 +195,6 @@ static int __init gemini_timer_of_init(struct device_node *np)
return -EINVAL;
}

- tick_rate = REG_TO_AHB_SPEED(val) * 1000000;
- printk(KERN_INFO "Bus: %dMHz", tick_rate / 1000000);
-
- tick_rate /= 6; /* APB bus run AHB*(1/6) */
-
- switch (val & CPU_AHB_RATIO_MASK) {
- case CPU_AHB_1_1:
- printk(KERN_CONT "(1/1)\n");
- break;
- case CPU_AHB_3_2:
- printk(KERN_CONT "(3/2)\n");
- break;
- case CPU_AHB_24_13:
- printk(KERN_CONT "(24/13)\n");
- break;
- case CPU_AHB_2_1:
- printk(KERN_CONT "(2/1)\n");
- break;
- }
-
/*
* Reset the interrupt mask and status
*/
@@ -255,9 +211,9 @@ static int __init gemini_timer_of_init(struct device_node *np)
writel(0, base + TIMER3_MATCH1);
writel(0, base + TIMER3_MATCH2);
clocksource_mmio_init(base + TIMER3_COUNT,
- "gemini_clocksource", tick_rate,
+ "fttmr010_clocksource", tick_rate,
300, 32, clocksource_mmio_readl_up);
- sched_clock_register(gemini_read_sched_clock, 32, tick_rate);
+ sched_clock_register(fttmr010_read_sched_clock, 32, tick_rate);

/*
* Setup clockevent timer (interrupt-driven.)
@@ -266,12 +222,82 @@ static int __init gemini_timer_of_init(struct device_node *np)
writel(0, base + TIMER1_LOAD);
writel(0, base + TIMER1_MATCH1);
writel(0, base + TIMER1_MATCH2);
- setup_irq(irq, &gemini_timer_irq);
- gemini_clockevent.cpumask = cpumask_of(0);
- clockevents_config_and_register(&gemini_clockevent, tick_rate,
+ setup_irq(irq, &fttmr010_timer_irq);
+ fttmr010_clockevent.cpumask = cpumask_of(0);
+ clockevents_config_and_register(&fttmr010_clockevent, tick_rate,
1, 0xffffffff);

return 0;
}
-CLOCKSOURCE_OF_DECLARE(nomadik_mtu, "cortina,gemini-timer",
- gemini_timer_of_init);
+
+static int __init fttmr010_timer_of_init(struct device_node *np)
+{
+ /*
+ * These implementations require a clock reference.
+ * FIXME: we currently only support clocking using PCLK
+ * and using EXTCLK is not supported in the driver.
+ */
+ struct clk *clk;
+
+ clk = of_clk_get_by_name(np, "PCLK");
+ if (IS_ERR(clk)) {
+ pr_err("could not get PCLK");
+ return PTR_ERR(clk);
+ }
+ tick_rate = clk_get_rate(clk);
+
+ return fttmr010_timer_common_init(np);
+}
+CLOCKSOURCE_OF_DECLARE(fttmr010, "faraday,fttmr010", fttmr010_timer_of_init);
+
+/*
+ * Gemini-specific: relevant registers in the global syscon
+ */
+#define GLOBAL_STATUS 0x04
+#define CPU_AHB_RATIO_MASK (0x3 << 18)
+#define CPU_AHB_1_1 (0x0 << 18)
+#define CPU_AHB_3_2 (0x1 << 18)
+#define CPU_AHB_24_13 (0x2 << 18)
+#define CPU_AHB_2_1 (0x3 << 18)
+#define REG_TO_AHB_SPEED(reg) ((((reg) >> 15) & 0x7) * 10 + 130)
+
+static int __init gemini_timer_of_init(struct device_node *np)
+{
+ static struct regmap *map;
+ int ret;
+ u32 val;
+
+ map = syscon_regmap_lookup_by_phandle(np, "syscon");
+ if (IS_ERR(map)) {
+ pr_err("Can't get regmap for syscon handle\n");
+ return -ENODEV;
+ }
+ ret = regmap_read(map, GLOBAL_STATUS, &val);
+ if (ret) {
+ pr_err("Can't read syscon status register\n");
+ return -ENXIO;
+ }
+
+ tick_rate = REG_TO_AHB_SPEED(val) * 1000000;
+ pr_info("Bus: %dMHz ", tick_rate / 1000000);
+
+ tick_rate /= 6; /* APB bus run AHB*(1/6) */
+
+ switch (val & CPU_AHB_RATIO_MASK) {
+ case CPU_AHB_1_1:
+ pr_cont("(1/1)\n");
+ break;
+ case CPU_AHB_3_2:
+ pr_cont("(3/2)\n");
+ break;
+ case CPU_AHB_24_13:
+ pr_cont("(24/13)\n");
+ break;
+ case CPU_AHB_2_1:
+ pr_cont("(2/1)\n");
+ break;
+ }
+
+ return fttmr010_timer_common_init(np);
+}
+CLOCKSOURCE_OF_DECLARE(gemini, "cortina,gemini-timer", gemini_timer_of_init);
diff --git a/drivers/clocksource/timer-integrator-ap.c b/drivers/clocksource/timer-integrator-ap.c
index df6e672afc04..04ad3066e190 100644
--- a/drivers/clocksource/timer-integrator-ap.c
+++ b/drivers/clocksource/timer-integrator-ap.c
@@ -200,7 +200,7 @@ static int __init integrator_ap_timer_init_of(struct device_node *node)
err = of_property_read_string(of_aliases,
"arm,timer-primary", &path);
if (err) {
- pr_warn("Failed to read property");
+ pr_warn("Failed to read property\n");
return err;
}

@@ -209,7 +209,7 @@ static int __init integrator_ap_timer_init_of(struct device_node *node)
err = of_property_read_string(of_aliases,
"arm,timer-secondary", &path);
if (err) {
- pr_warn("Failed to read property");
+ pr_warn("Failed to read property\n");
return err;
}

diff --git a/drivers/clocksource/timer-nps.c b/drivers/clocksource/timer-nps.c
index da1f7986e477..e74ea1722ad3 100644
--- a/drivers/clocksource/timer-nps.c
+++ b/drivers/clocksource/timer-nps.c
@@ -55,7 +55,7 @@ static int __init nps_get_timer_clk(struct device_node *node,
*clk = of_clk_get(node, 0);
ret = PTR_ERR_OR_ZERO(*clk);
if (ret) {
- pr_err("timer missing clk");
+ pr_err("timer missing clk\n");
return ret;
}

@@ -247,7 +247,7 @@ static int __init nps_setup_clockevent(struct device_node *node)

nps_timer0_irq = irq_of_parse_and_map(node, 0);
if (nps_timer0_irq <= 0) {
- pr_err("clockevent: missing irq");
+ pr_err("clockevent: missing irq\n");
return -EINVAL;
}

@@ -270,7 +270,7 @@ static int __init nps_setup_clockevent(struct device_node *node)
nps_timer_starting_cpu,
nps_timer_dying_cpu);
if (ret) {
- pr_err("Failed to setup hotplug state");
+ pr_err("Failed to setup hotplug state\n");
clk_disable_unprepare(clk);
free_percpu_irq(nps_timer0_irq, &nps_clockevent_device);
return ret;
diff --git a/drivers/clocksource/timer-prima2.c b/drivers/clocksource/timer-prima2.c
index bfa981ac1eaf..b4122ed1accb 100644
--- a/drivers/clocksource/timer-prima2.c
+++ b/drivers/clocksource/timer-prima2.c
@@ -196,20 +196,20 @@ static int __init sirfsoc_prima2_timer_init(struct device_node *np)

clk = of_clk_get(np, 0);
if (IS_ERR(clk)) {
- pr_err("Failed to get clock");
+ pr_err("Failed to get clock\n");
return PTR_ERR(clk);
}

ret = clk_prepare_enable(clk);
if (ret) {
- pr_err("Failed to enable clock");
+ pr_err("Failed to enable clock\n");
return ret;
}

rate = clk_get_rate(clk);

if (rate < PRIMA2_CLOCK_FREQ || rate % PRIMA2_CLOCK_FREQ) {
- pr_err("Invalid clock rate");
+ pr_err("Invalid clock rate\n");
return -EINVAL;
}

@@ -229,7 +229,7 @@ static int __init sirfsoc_prima2_timer_init(struct device_node *np)

ret = clocksource_register_hz(&sirfsoc_clocksource, PRIMA2_CLOCK_FREQ);
if (ret) {
- pr_err("Failed to register clocksource");
+ pr_err("Failed to register clocksource\n");
return ret;
}

@@ -237,7 +237,7 @@ static int __init sirfsoc_prima2_timer_init(struct device_node *np)

ret = setup_irq(sirfsoc_timer_irq.irq, &sirfsoc_timer_irq);
if (ret) {
- pr_err("Failed to setup irq");
+ pr_err("Failed to setup irq\n");
return ret;
}

diff --git a/drivers/clocksource/timer-sp804.c b/drivers/clocksource/timer-sp804.c
index d07863388e05..2d575a8c0939 100644
--- a/drivers/clocksource/timer-sp804.c
+++ b/drivers/clocksource/timer-sp804.c
@@ -299,13 +299,13 @@ static int __init integrator_cp_of_init(struct device_node *np)

base = of_iomap(np, 0);
if (!base) {
- pr_err("Failed to iomap");
+ pr_err("Failed to iomap\n");
return -ENXIO;
}

clk = of_clk_get(np, 0);
if (IS_ERR(clk)) {
- pr_err("Failed to get clock");
+ pr_err("Failed to get clock\n");
return PTR_ERR(clk);
}

diff --git a/drivers/clocksource/timer-sun5i.c b/drivers/clocksource/timer-sun5i.c
index a3e662b15964..2e9c830ae1cd 100644
--- a/drivers/clocksource/timer-sun5i.c
+++ b/drivers/clocksource/timer-sun5i.c
@@ -332,19 +332,19 @@ static int __init sun5i_timer_init(struct device_node *node)

timer_base = of_io_request_and_map(node, 0, of_node_full_name(node));
if (IS_ERR(timer_base)) {
- pr_err("Can't map registers");
+ pr_err("Can't map registers\n");
return PTR_ERR(timer_base);;
}

irq = irq_of_parse_and_map(node, 0);
if (irq <= 0) {
- pr_err("Can't parse IRQ");
+ pr_err("Can't parse IRQ\n");
return -EINVAL;
}

clk = of_clk_get(node, 0);
if (IS_ERR(clk)) {
- pr_err("Can't get timer clock");
+ pr_err("Can't get timer clock\n");
return PTR_ERR(clk);
}

diff --git a/drivers/clocksource/vf_pit_timer.c b/drivers/clocksource/vf_pit_timer.c
index 55d8d8402d90..e0849e20a307 100644
--- a/drivers/clocksource/vf_pit_timer.c
+++ b/drivers/clocksource/vf_pit_timer.c
@@ -165,7 +165,7 @@ static int __init pit_timer_init(struct device_node *np)

timer_base = of_iomap(np, 0);
if (!timer_base) {
- pr_err("Failed to iomap");
+ pr_err("Failed to iomap\n");
return -ENXIO;
}

diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
index e8142803a1a7..b77435783ef3 100644
--- a/drivers/ptp/ptp_clock.c
+++ b/drivers/ptp/ptp_clock.c
@@ -97,30 +97,26 @@ static s32 scaled_ppm_to_ppb(long ppm)

/* posix clock implementation */

-static int ptp_clock_getres(struct posix_clock *pc, struct timespec *tp)
+static int ptp_clock_getres(struct posix_clock *pc, struct timespec64 *tp)
{
tp->tv_sec = 0;
tp->tv_nsec = 1;
return 0;
}

-static int ptp_clock_settime(struct posix_clock *pc, const struct timespec *tp)
+static int ptp_clock_settime(struct posix_clock *pc, const struct timespec64 *tp)
{
struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock);
- struct timespec64 ts = timespec_to_timespec64(*tp);

- return ptp->info->settime64(ptp->info, &ts);
+ return ptp->info->settime64(ptp->info, tp);
}

-static int ptp_clock_gettime(struct posix_clock *pc, struct timespec *tp)
+static int ptp_clock_gettime(struct posix_clock *pc, struct timespec64 *tp)
{
struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock);
- struct timespec64 ts;
int err;

- err = ptp->info->gettime64(ptp->info, &ts);
- if (!err)
- *tp = timespec64_to_timespec(ts);
+ err = ptp->info->gettime64(ptp->info, tp);
return err;
}

@@ -133,7 +129,7 @@ static int ptp_clock_adjtime(struct posix_clock *pc, struct timex *tx)
ops = ptp->info;

if (tx->modes & ADJ_SETOFFSET) {
- struct timespec ts;
+ struct timespec64 ts;
ktime_t kt;
s64 delta;

@@ -146,7 +142,7 @@ static int ptp_clock_adjtime(struct posix_clock *pc, struct timex *tx)
if ((unsigned long) ts.tv_nsec >= NSEC_PER_SEC)
return -EINVAL;

- kt = timespec_to_ktime(ts);
+ kt = timespec64_to_ktime(ts);
delta = ktime_to_ns(kt);
err = ops->adjtime(ops, delta);
} else if (tx->modes & ADJ_FREQUENCY) {
diff --git a/include/clocksource/arm_arch_timer.h b/include/clocksource/arm_arch_timer.h
index caedb74c9210..cc805b72994a 100644
--- a/include/clocksource/arm_arch_timer.h
+++ b/include/clocksource/arm_arch_timer.h
@@ -16,9 +16,13 @@
#ifndef __CLKSOURCE_ARM_ARCH_TIMER_H
#define __CLKSOURCE_ARM_ARCH_TIMER_H

+#include <linux/bitops.h>
#include <linux/timecounter.h>
#include <linux/types.h>

+#define ARCH_TIMER_TYPE_CP15 BIT(0)
+#define ARCH_TIMER_TYPE_MEM BIT(1)
+
#define ARCH_TIMER_CTRL_ENABLE (1 << 0)
#define ARCH_TIMER_CTRL_IT_MASK (1 << 1)
#define ARCH_TIMER_CTRL_IT_STAT (1 << 2)
@@ -34,11 +38,27 @@ enum arch_timer_reg {
ARCH_TIMER_REG_TVAL,
};

+enum arch_timer_ppi_nr {
+ ARCH_TIMER_PHYS_SECURE_PPI,
+ ARCH_TIMER_PHYS_NONSECURE_PPI,
+ ARCH_TIMER_VIRT_PPI,
+ ARCH_TIMER_HYP_PPI,
+ ARCH_TIMER_MAX_TIMER_PPI
+};
+
+enum arch_timer_spi_nr {
+ ARCH_TIMER_PHYS_SPI,
+ ARCH_TIMER_VIRT_SPI,
+ ARCH_TIMER_MAX_TIMER_SPI
+};
+
#define ARCH_TIMER_PHYS_ACCESS 0
#define ARCH_TIMER_VIRT_ACCESS 1
#define ARCH_TIMER_MEM_PHYS_ACCESS 2
#define ARCH_TIMER_MEM_VIRT_ACCESS 3

+#define ARCH_TIMER_MEM_MAX_FRAMES 8
+
#define ARCH_TIMER_USR_PCT_ACCESS_EN (1 << 0) /* physical counter */
#define ARCH_TIMER_USR_VCT_ACCESS_EN (1 << 1) /* virtual counter */
#define ARCH_TIMER_VIRT_EVT_EN (1 << 2)
@@ -54,6 +74,20 @@ struct arch_timer_kvm_info {
int virtual_irq;
};

+struct arch_timer_mem_frame {
+ bool valid;
+ phys_addr_t cntbase;
+ size_t size;
+ int phys_irq;
+ int virt_irq;
+};
+
+struct arch_timer_mem {
+ phys_addr_t cntctlbase;
+ size_t size;
+ struct arch_timer_mem_frame frame[ARCH_TIMER_MEM_MAX_FRAMES];
+};
+
#ifdef CONFIG_ARM_ARCH_TIMER

extern u32 arch_timer_get_rate(void);
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 9b05886f9773..31937249f8cc 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -595,6 +595,13 @@ enum acpi_reconfig_event {
int acpi_reconfig_notifier_register(struct notifier_block *nb);
int acpi_reconfig_notifier_unregister(struct notifier_block *nb);

+#ifdef CONFIG_ACPI_GTDT
+int acpi_gtdt_init(struct acpi_table_header *table, int *platform_timer_count);
+int acpi_gtdt_map_ppi(int type);
+bool acpi_gtdt_c3stop(int type);
+int acpi_arch_timer_mem_init(struct arch_timer_mem *timer_mem, int *timer_count);
+#endif
+
#else /* !CONFIG_ACPI */

#define acpi_disabled 1
diff --git a/include/linux/clockchips.h b/include/linux/clockchips.h
index 5d3053c34fb3..eef1569e5cd0 100644
--- a/include/linux/clockchips.h
+++ b/include/linux/clockchips.h
@@ -182,7 +182,6 @@ extern u64 clockevent_delta2ns(unsigned long latch, struct clock_event_device *e
extern void clockevents_register_device(struct clock_event_device *dev);
extern int clockevents_unbind_device(struct clock_event_device *ced, int cpu);

-extern void clockevents_config(struct clock_event_device *dev, u32 freq);
extern void clockevents_config_and_register(struct clock_event_device *dev,
u32 freq, unsigned long min_delta,
unsigned long max_delta);
diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
index cfc75848a35d..f2b10d9ebd04 100644
--- a/include/linux/clocksource.h
+++ b/include/linux/clocksource.h
@@ -120,7 +120,7 @@ struct clocksource {
#define CLOCK_SOURCE_RESELECT 0x100

/* simplify initialization of mask field */
-#define CLOCKSOURCE_MASK(bits) (u64)((bits) < 64 ? ((1ULL<<(bits))-1) : -1)
+#define CLOCKSOURCE_MASK(bits) GENMASK_ULL((bits) - 1, 0)

static inline u32 clocksource_freq2mult(u32 freq, u32 shift_constant, u64 from)
{
diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
index 249e579ecd4c..8c5b10eb7265 100644
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -276,8 +276,6 @@ static inline int hrtimer_is_hres_active(struct hrtimer *timer)
return timer->base->cpu_base->hres_active;
}

-extern void hrtimer_peek_ahead_timers(void);
-
/*
* The resolution of the clocks. The resolution value is returned in
* the clock_getres() system call to give application programmers an
@@ -300,8 +298,6 @@ extern unsigned int hrtimer_resolution;

#define hrtimer_resolution (unsigned int)LOW_RES_NSEC

-static inline void hrtimer_peek_ahead_timers(void) { }
-
static inline int hrtimer_is_hres_active(struct hrtimer *timer)
{
return 0;
@@ -456,7 +452,7 @@ static inline u64 hrtimer_forward_now(struct hrtimer *timer,
}

/* Precise sleep: */
-extern long hrtimer_nanosleep(struct timespec *rqtp,
+extern long hrtimer_nanosleep(struct timespec64 *rqtp,
struct timespec __user *rmtp,
const enum hrtimer_mode mode,
const clockid_t clockid);
diff --git a/include/linux/irqchip/mips-gic.h b/include/linux/irqchip/mips-gic.h
index 7b49c71c968b..2b0e56619e53 100644
--- a/include/linux/irqchip/mips-gic.h
+++ b/include/linux/irqchip/mips-gic.h
@@ -258,7 +258,6 @@ extern unsigned int gic_present;
extern void gic_init(unsigned long gic_base_addr,
unsigned long gic_addrspace_size, unsigned int cpu_vec,
unsigned int irqbase);
-extern void gic_clocksource_init(unsigned int);
extern u64 gic_read_count(void);
extern unsigned int gic_get_count_width(void);
extern u64 gic_read_compare(void);
diff --git a/include/linux/posix-clock.h b/include/linux/posix-clock.h
index 34c4498b800f..83b22ae9ae12 100644
--- a/include/linux/posix-clock.h
+++ b/include/linux/posix-clock.h
@@ -59,23 +59,23 @@ struct posix_clock_operations {

int (*clock_adjtime)(struct posix_clock *pc, struct timex *tx);

- int (*clock_gettime)(struct posix_clock *pc, struct timespec *ts);
+ int (*clock_gettime)(struct posix_clock *pc, struct timespec64 *ts);

- int (*clock_getres) (struct posix_clock *pc, struct timespec *ts);
+ int (*clock_getres) (struct posix_clock *pc, struct timespec64 *ts);

int (*clock_settime)(struct posix_clock *pc,
- const struct timespec *ts);
+ const struct timespec64 *ts);

int (*timer_create) (struct posix_clock *pc, struct k_itimer *kit);

int (*timer_delete) (struct posix_clock *pc, struct k_itimer *kit);

void (*timer_gettime)(struct posix_clock *pc,
- struct k_itimer *kit, struct itimerspec *tsp);
+ struct k_itimer *kit, struct itimerspec64 *tsp);

int (*timer_settime)(struct posix_clock *pc,
struct k_itimer *kit, int flags,
- struct itimerspec *tsp, struct itimerspec *old);
+ struct itimerspec64 *tsp, struct itimerspec64 *old);
/*
* Optional character device methods:
*/
diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
index 64aa189efe21..8c1e43ab14a9 100644
--- a/include/linux/posix-timers.h
+++ b/include/linux/posix-timers.h
@@ -87,22 +87,22 @@ struct k_itimer {
};

struct k_clock {
- int (*clock_getres) (const clockid_t which_clock, struct timespec *tp);
+ int (*clock_getres) (const clockid_t which_clock, struct timespec64 *tp);
int (*clock_set) (const clockid_t which_clock,
- const struct timespec *tp);
- int (*clock_get) (const clockid_t which_clock, struct timespec * tp);
+ const struct timespec64 *tp);
+ int (*clock_get) (const clockid_t which_clock, struct timespec64 *tp);
int (*clock_adj) (const clockid_t which_clock, struct timex *tx);
int (*timer_create) (struct k_itimer *timer);
int (*nsleep) (const clockid_t which_clock, int flags,
- struct timespec *, struct timespec __user *);
+ struct timespec64 *, struct timespec __user *);
long (*nsleep_restart) (struct restart_block *restart_block);
- int (*timer_set) (struct k_itimer * timr, int flags,
- struct itimerspec * new_setting,
- struct itimerspec * old_setting);
- int (*timer_del) (struct k_itimer * timr);
+ int (*timer_set) (struct k_itimer *timr, int flags,
+ struct itimerspec64 *new_setting,
+ struct itimerspec64 *old_setting);
+ int (*timer_del) (struct k_itimer *timr);
#define TIMER_RETRY 1
- void (*timer_get) (struct k_itimer * timr,
- struct itimerspec * cur_setting);
+ void (*timer_get) (struct k_itimer *timr,
+ struct itimerspec64 *cur_setting);
};

extern struct k_clock clock_posix_cpu;
diff --git a/include/linux/timekeeping.h b/include/linux/timekeeping.h
index b598cbc7b576..ddc229ff6d1e 100644
--- a/include/linux/timekeeping.h
+++ b/include/linux/timekeeping.h
@@ -19,21 +19,6 @@ extern void do_gettimeofday(struct timeval *tv);
extern int do_settimeofday64(const struct timespec64 *ts);
extern int do_sys_settimeofday64(const struct timespec64 *tv,
const struct timezone *tz);
-static inline int do_sys_settimeofday(const struct timespec *tv,
- const struct timezone *tz)
-{
- struct timespec64 ts64;
-
- if (!tv)
- return do_sys_settimeofday64(NULL, tz);
-
- if (!timespec_valid(tv))
- return -EINVAL;
-
- ts64 = timespec_to_timespec64(*tv);
- return do_sys_settimeofday64(&ts64, tz);
-}
-
/*
* Kernel time accessors
*/
@@ -273,6 +258,11 @@ static inline void timekeeping_clocktai(struct timespec *ts)
*ts = ktime_to_timespec(ktime_get_clocktai());
}

+static inline void timekeeping_clocktai64(struct timespec64 *ts)
+{
+ *ts = ktime_to_timespec64(ktime_get_clocktai());
+}
+
/*
* RTC specific
*/
diff --git a/kernel/compat.c b/kernel/compat.c
index 19aec5d98108..933bcb31ae10 100644
--- a/kernel/compat.c
+++ b/kernel/compat.c
@@ -108,8 +108,8 @@ COMPAT_SYSCALL_DEFINE2(gettimeofday, struct compat_timeval __user *, tv,
COMPAT_SYSCALL_DEFINE2(settimeofday, struct compat_timeval __user *, tv,
struct timezone __user *, tz)
{
+ struct timespec64 new_ts;
struct timeval user_tv;
- struct timespec new_ts;
struct timezone new_tz;

if (tv) {
@@ -123,7 +123,7 @@ COMPAT_SYSCALL_DEFINE2(settimeofday, struct compat_timeval __user *, tv,
return -EFAULT;
}

- return do_sys_settimeofday(tv ? &new_ts : NULL, tz ? &new_tz : NULL);
+ return do_sys_settimeofday64(tv ? &new_ts : NULL, tz ? &new_tz : NULL);
}

static int __compat_get_timeval(struct timeval *tv, const struct compat_timeval __user *ctv)
@@ -240,18 +240,20 @@ COMPAT_SYSCALL_DEFINE2(nanosleep, struct compat_timespec __user *, rqtp,
struct compat_timespec __user *, rmtp)
{
struct timespec tu, rmt;
+ struct timespec64 tu64;
mm_segment_t oldfs;
long ret;

if (compat_get_timespec(&tu, rqtp))
return -EFAULT;

- if (!timespec_valid(&tu))
+ tu64 = timespec_to_timespec64(tu);
+ if (!timespec64_valid(&tu64))
return -EINVAL;

oldfs = get_fs();
set_fs(KERNEL_DS);
- ret = hrtimer_nanosleep(&tu,
+ ret = hrtimer_nanosleep(&tu64,
rmtp ? (struct timespec __user *)&rmt : NULL,
HRTIMER_MODE_REL, CLOCK_MONOTONIC);
set_fs(oldfs);
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index acf0a5a06da7..08637699a862 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1176,6 +1176,8 @@ static struct ctl_table kern_table[] = {
.maxlen = sizeof(unsigned int),
.mode = 0644,
.proc_handler = timer_migration_handler,
+ .extra1 = &zero,
+ .extra2 = &one,
},
#endif
#ifdef CONFIG_BPF_SYSCALL
diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
index ce3a31e8eb36..5cb5b0008d97 100644
--- a/kernel/time/alarmtimer.c
+++ b/kernel/time/alarmtimer.c
@@ -541,7 +541,7 @@ static enum alarmtimer_restart alarm_handle_timer(struct alarm *alarm,
*
* Returns the granularity of underlying alarm base clock
*/
-static int alarm_clock_getres(const clockid_t which_clock, struct timespec *tp)
+static int alarm_clock_getres(const clockid_t which_clock, struct timespec64 *tp)
{
if (!alarmtimer_get_rtcdev())
return -EINVAL;
@@ -558,14 +558,14 @@ static int alarm_clock_getres(const clockid_t which_clock, struct timespec *tp)
*
* Provides the underlying alarm base time.
*/
-static int alarm_clock_get(clockid_t which_clock, struct timespec *tp)
+static int alarm_clock_get(clockid_t which_clock, struct timespec64 *tp)
{
struct alarm_base *base = &alarm_bases[clock2alarm(which_clock)];

if (!alarmtimer_get_rtcdev())
return -EINVAL;

- *tp = ktime_to_timespec(base->gettime());
+ *tp = ktime_to_timespec64(base->gettime());
return 0;
}

@@ -598,19 +598,19 @@ static int alarm_timer_create(struct k_itimer *new_timer)
* Copies out the current itimerspec data
*/
static void alarm_timer_get(struct k_itimer *timr,
- struct itimerspec *cur_setting)
+ struct itimerspec64 *cur_setting)
{
ktime_t relative_expiry_time =
alarm_expires_remaining(&(timr->it.alarm.alarmtimer));

if (ktime_to_ns(relative_expiry_time) > 0) {
- cur_setting->it_value = ktime_to_timespec(relative_expiry_time);
+ cur_setting->it_value = ktime_to_timespec64(relative_expiry_time);
} else {
cur_setting->it_value.tv_sec = 0;
cur_setting->it_value.tv_nsec = 0;
}

- cur_setting->it_interval = ktime_to_timespec(timr->it.alarm.interval);
+ cur_setting->it_interval = ktime_to_timespec64(timr->it.alarm.interval);
}

/**
@@ -640,8 +640,8 @@ static int alarm_timer_del(struct k_itimer *timr)
* Sets the timer to new_setting, and starts the timer.
*/
static int alarm_timer_set(struct k_itimer *timr, int flags,
- struct itimerspec *new_setting,
- struct itimerspec *old_setting)
+ struct itimerspec64 *new_setting,
+ struct itimerspec64 *old_setting)
{
ktime_t exp;

@@ -659,8 +659,8 @@ static int alarm_timer_set(struct k_itimer *timr, int flags,
return TIMER_RETRY;

/* start the timer */
- timr->it.alarm.interval = timespec_to_ktime(new_setting->it_interval);
- exp = timespec_to_ktime(new_setting->it_value);
+ timr->it.alarm.interval = timespec64_to_ktime(new_setting->it_interval);
+ exp = timespec64_to_ktime(new_setting->it_value);
/* Convert (if necessary) to absolute time */
if (flags != TIMER_ABSTIME) {
ktime_t now;
@@ -790,13 +790,14 @@ static long __sched alarm_timer_nsleep_restart(struct restart_block *restart)
* Handles clock_nanosleep calls against _ALARM clockids
*/
static int alarm_timer_nsleep(const clockid_t which_clock, int flags,
- struct timespec *tsreq, struct timespec __user *rmtp)
+ struct timespec64 *tsreq,
+ struct timespec __user *rmtp)
{
enum alarmtimer_type type = clock2alarm(which_clock);
+ struct restart_block *restart;
struct alarm alarm;
ktime_t exp;
int ret = 0;
- struct restart_block *restart;

if (!alarmtimer_get_rtcdev())
return -ENOTSUPP;
@@ -809,7 +810,7 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags,

alarm_init(&alarm, type, alarmtimer_nsleep_wakeup);

- exp = timespec_to_ktime(*tsreq);
+ exp = timespec64_to_ktime(*tsreq);
/* Convert (if necessary) to absolute time */
if (flags != TIMER_ABSTIME) {
ktime_t now = alarm_bases[type].gettime();
diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
index 97ac0951f164..4237e0744e26 100644
--- a/kernel/time/clockevents.c
+++ b/kernel/time/clockevents.c
@@ -468,7 +468,7 @@ void clockevents_register_device(struct clock_event_device *dev)
}
EXPORT_SYMBOL_GPL(clockevents_register_device);

-void clockevents_config(struct clock_event_device *dev, u32 freq)
+static void clockevents_config(struct clock_event_device *dev, u32 freq)
{
u64 sec;

diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index ec08f527d7ee..a7560123617c 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1368,10 +1368,7 @@ void hrtimer_interrupt(struct clock_event_device *dev)
ktime_to_ns(delta));
}

-/*
- * local version of hrtimer_peek_ahead_timers() called with interrupts
- * disabled.
- */
+/* called with interrupts disabled */
static inline void __hrtimer_peek_ahead_timers(void)
{
struct tick_device *td;
@@ -1506,7 +1503,7 @@ long __sched hrtimer_nanosleep_restart(struct restart_block *restart)
return ret;
}

-long hrtimer_nanosleep(struct timespec *rqtp, struct timespec __user *rmtp,
+long hrtimer_nanosleep(struct timespec64 *rqtp, struct timespec __user *rmtp,
const enum hrtimer_mode mode, const clockid_t clockid)
{
struct restart_block *restart;
@@ -1519,7 +1516,7 @@ long hrtimer_nanosleep(struct timespec *rqtp, struct timespec __user *rmtp,
slack = 0;

hrtimer_init_on_stack(&t.timer, clockid, mode);
- hrtimer_set_expires_range_ns(&t.timer, timespec_to_ktime(*rqtp), slack);
+ hrtimer_set_expires_range_ns(&t.timer, timespec64_to_ktime(*rqtp), slack);
if (do_nanosleep(&t, mode))
goto out;

@@ -1550,15 +1547,17 @@ long hrtimer_nanosleep(struct timespec *rqtp, struct timespec __user *rmtp,
SYSCALL_DEFINE2(nanosleep, struct timespec __user *, rqtp,
struct timespec __user *, rmtp)
{
+ struct timespec64 tu64;
struct timespec tu;

if (copy_from_user(&tu, rqtp, sizeof(tu)))
return -EFAULT;

- if (!timespec_valid(&tu))
+ tu64 = timespec_to_timespec64(tu);
+ if (!timespec64_valid(&tu64))
return -EINVAL;

- return hrtimer_nanosleep(&tu, rmtp, HRTIMER_MODE_REL, CLOCK_MONOTONIC);
+ return hrtimer_nanosleep(&tu64, rmtp, HRTIMER_MODE_REL, CLOCK_MONOTONIC);
}

/*
diff --git a/kernel/time/posix-clock.c b/kernel/time/posix-clock.c
index 9cff0ab82b63..31d588d37a17 100644
--- a/kernel/time/posix-clock.c
+++ b/kernel/time/posix-clock.c
@@ -297,7 +297,7 @@ static int pc_clock_adjtime(clockid_t id, struct timex *tx)
return err;
}

-static int pc_clock_gettime(clockid_t id, struct timespec *ts)
+static int pc_clock_gettime(clockid_t id, struct timespec64 *ts)
{
struct posix_clock_desc cd;
int err;
@@ -316,7 +316,7 @@ static int pc_clock_gettime(clockid_t id, struct timespec *ts)
return err;
}

-static int pc_clock_getres(clockid_t id, struct timespec *ts)
+static int pc_clock_getres(clockid_t id, struct timespec64 *ts)
{
struct posix_clock_desc cd;
int err;
@@ -335,7 +335,7 @@ static int pc_clock_getres(clockid_t id, struct timespec *ts)
return err;
}

-static int pc_clock_settime(clockid_t id, const struct timespec *ts)
+static int pc_clock_settime(clockid_t id, const struct timespec64 *ts)
{
struct posix_clock_desc cd;
int err;
@@ -399,7 +399,7 @@ static int pc_timer_delete(struct k_itimer *kit)
return err;
}

-static void pc_timer_gettime(struct k_itimer *kit, struct itimerspec *ts)
+static void pc_timer_gettime(struct k_itimer *kit, struct itimerspec64 *ts)
{
clockid_t id = kit->it_clock;
struct posix_clock_desc cd;
@@ -414,7 +414,7 @@ static void pc_timer_gettime(struct k_itimer *kit, struct itimerspec *ts)
}

static int pc_timer_settime(struct k_itimer *kit, int flags,
- struct itimerspec *ts, struct itimerspec *old)
+ struct itimerspec64 *ts, struct itimerspec64 *old)
{
clockid_t id = kit->it_clock;
struct posix_clock_desc cd;
diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
index 4513ad16a253..949e434d3536 100644
--- a/kernel/time/posix-cpu-timers.c
+++ b/kernel/time/posix-cpu-timers.c
@@ -116,7 +116,7 @@ static inline u64 virt_ticks(struct task_struct *p)
}

static int
-posix_cpu_clock_getres(const clockid_t which_clock, struct timespec *tp)
+posix_cpu_clock_getres(const clockid_t which_clock, struct timespec64 *tp)
{
int error = check_clock(which_clock);
if (!error) {
@@ -135,7 +135,7 @@ posix_cpu_clock_getres(const clockid_t which_clock, struct timespec *tp)
}

static int
-posix_cpu_clock_set(const clockid_t which_clock, const struct timespec *tp)
+posix_cpu_clock_set(const clockid_t which_clock, const struct timespec64 *tp)
{
/*
* You can never reset a CPU clock, but we check for other errors
@@ -261,7 +261,7 @@ static int cpu_clock_sample_group(const clockid_t which_clock,

static int posix_cpu_clock_get_task(struct task_struct *tsk,
const clockid_t which_clock,
- struct timespec *tp)
+ struct timespec64 *tp)
{
int err = -EINVAL;
u64 rtn;
@@ -275,13 +275,13 @@ static int posix_cpu_clock_get_task(struct task_struct *tsk,
}

if (!err)
- *tp = ns_to_timespec(rtn);
+ *tp = ns_to_timespec64(rtn);

return err;
}


-static int posix_cpu_clock_get(const clockid_t which_clock, struct timespec *tp)
+static int posix_cpu_clock_get(const clockid_t which_clock, struct timespec64 *tp)
{
const pid_t pid = CPUCLOCK_PID(which_clock);
int err = -EINVAL;
@@ -562,7 +562,7 @@ static int cpu_timer_sample_group(const clockid_t which_clock,
* and try again. (This happens when the timer is in the middle of firing.)
*/
static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags,
- struct itimerspec *new, struct itimerspec *old)
+ struct itimerspec64 *new, struct itimerspec64 *old)
{
unsigned long flags;
struct sighand_struct *sighand;
@@ -572,7 +572,7 @@ static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags,

WARN_ON_ONCE(p == NULL);

- new_expires = timespec_to_ns(&new->it_value);
+ new_expires = timespec64_to_ns(&new->it_value);

/*
* Protect against sighand release/switch in exit/exec and p->cpu_timers
@@ -633,7 +633,7 @@ static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags,
bump_cpu_timer(timer, val);
if (val < timer->it.cpu.expires) {
old_expires = timer->it.cpu.expires - val;
- old->it_value = ns_to_timespec(old_expires);
+ old->it_value = ns_to_timespec64(old_expires);
} else {
old->it_value.tv_nsec = 1;
old->it_value.tv_sec = 0;
@@ -671,7 +671,7 @@ static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags,
* Install the new reload setting, and
* set up the signal and overrun bookkeeping.
*/
- timer->it.cpu.incr = timespec_to_ns(&new->it_interval);
+ timer->it.cpu.incr = timespec64_to_ns(&new->it_interval);

/*
* This acts as a modification timestamp for the timer,
@@ -695,12 +695,12 @@ static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags,
ret = 0;
out:
if (old)
- old->it_interval = ns_to_timespec(old_incr);
+ old->it_interval = ns_to_timespec64(old_incr);

return ret;
}

-static void posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec *itp)
+static void posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec64 *itp)
{
u64 now;
struct task_struct *p = timer->it.cpu.task;
@@ -710,7 +710,7 @@ static void posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec *itp)
/*
* Easy part: convert the reload time.
*/
- itp->it_interval = ns_to_timespec(timer->it.cpu.incr);
+ itp->it_interval = ns_to_timespec64(timer->it.cpu.incr);

if (timer->it.cpu.expires == 0) { /* Timer not armed at all. */
itp->it_value.tv_sec = itp->it_value.tv_nsec = 0;
@@ -739,7 +739,7 @@ static void posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec *itp)
* Call the timer disarmed, nothing else to do.
*/
timer->it.cpu.expires = 0;
- itp->it_value = ns_to_timespec(timer->it.cpu.expires);
+ itp->it_value = ns_to_timespec64(timer->it.cpu.expires);
return;
} else {
cpu_timer_sample_group(timer->it_clock, p, &now);
@@ -748,7 +748,7 @@ static void posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec *itp)
}

if (now < timer->it.cpu.expires) {
- itp->it_value = ns_to_timespec(timer->it.cpu.expires - now);
+ itp->it_value = ns_to_timespec64(timer->it.cpu.expires - now);
} else {
/*
* The timer should have expired already, but the firing
@@ -825,6 +825,8 @@ static void check_thread_timers(struct task_struct *tsk,
* At the hard limit, we just die.
* No need to calculate anything else now.
*/
+ pr_info("CPU Watchdog Timeout (hard): %s[%d]\n",
+ tsk->comm, task_pid_nr(tsk));
__group_send_sig_info(SIGKILL, SEND_SIG_PRIV, tsk);
return;
}
@@ -836,8 +838,7 @@ static void check_thread_timers(struct task_struct *tsk,
soft += USEC_PER_SEC;
sig->rlim[RLIMIT_RTTIME].rlim_cur = soft;
}
- printk(KERN_INFO
- "RT Watchdog Timeout: %s[%d]\n",
+ pr_info("RT Watchdog Timeout (soft): %s[%d]\n",
tsk->comm, task_pid_nr(tsk));
__group_send_sig_info(SIGXCPU, SEND_SIG_PRIV, tsk);
}
@@ -935,6 +936,8 @@ static void check_process_timers(struct task_struct *tsk,
* At the hard limit, we just die.
* No need to calculate anything else now.
*/
+ pr_info("RT Watchdog Timeout (hard): %s[%d]\n",
+ tsk->comm, task_pid_nr(tsk));
__group_send_sig_info(SIGKILL, SEND_SIG_PRIV, tsk);
return;
}
@@ -942,6 +945,8 @@ static void check_process_timers(struct task_struct *tsk,
/*
* At the soft limit, send a SIGXCPU every second.
*/
+ pr_info("CPU Watchdog Timeout (soft): %s[%d]\n",
+ tsk->comm, task_pid_nr(tsk));
__group_send_sig_info(SIGXCPU, SEND_SIG_PRIV, tsk);
if (soft < hard) {
soft++;
@@ -1214,7 +1219,7 @@ void set_process_cpu_timer(struct task_struct *tsk, unsigned int clock_idx,
}

static int do_cpu_nanosleep(const clockid_t which_clock, int flags,
- struct timespec *rqtp, struct itimerspec *it)
+ struct timespec64 *rqtp, struct itimerspec64 *it)
{
struct k_itimer timer;
int error;
@@ -1229,7 +1234,7 @@ static int do_cpu_nanosleep(const clockid_t which_clock, int flags,
error = posix_cpu_timer_create(&timer);
timer.it_process = current;
if (!error) {
- static struct itimerspec zero_it;
+ static struct itimerspec64 zero_it;

memset(it, 0, sizeof *it);
it->it_value = *rqtp;
@@ -1264,7 +1269,7 @@ static int do_cpu_nanosleep(const clockid_t which_clock, int flags,
/*
* We were interrupted by a signal.
*/
- *rqtp = ns_to_timespec(timer.it.cpu.expires);
+ *rqtp = ns_to_timespec64(timer.it.cpu.expires);
error = posix_cpu_timer_set(&timer, 0, &zero_it, it);
if (!error) {
/*
@@ -1301,10 +1306,11 @@ static int do_cpu_nanosleep(const clockid_t which_clock, int flags,
static long posix_cpu_nsleep_restart(struct restart_block *restart_block);

static int posix_cpu_nsleep(const clockid_t which_clock, int flags,
- struct timespec *rqtp, struct timespec __user *rmtp)
+ struct timespec64 *rqtp, struct timespec __user *rmtp)
{
struct restart_block *restart_block = &current->restart_block;
- struct itimerspec it;
+ struct itimerspec64 it;
+ struct timespec ts;
int error;

/*
@@ -1324,13 +1330,14 @@ static int posix_cpu_nsleep(const clockid_t which_clock, int flags,
/*
* Report back to the user the time still remaining.
*/
- if (rmtp && copy_to_user(rmtp, &it.it_value, sizeof *rmtp))
+ ts = timespec64_to_timespec(it.it_value);
+ if (rmtp && copy_to_user(rmtp, &ts, sizeof(*rmtp)))
return -EFAULT;

restart_block->fn = posix_cpu_nsleep_restart;
restart_block->nanosleep.clockid = which_clock;
restart_block->nanosleep.rmtp = rmtp;
- restart_block->nanosleep.expires = timespec_to_ns(rqtp);
+ restart_block->nanosleep.expires = timespec64_to_ns(rqtp);
}
return error;
}
@@ -1338,11 +1345,12 @@ static int posix_cpu_nsleep(const clockid_t which_clock, int flags,
static long posix_cpu_nsleep_restart(struct restart_block *restart_block)
{
clockid_t which_clock = restart_block->nanosleep.clockid;
- struct timespec t;
- struct itimerspec it;
+ struct itimerspec64 it;
+ struct timespec64 t;
+ struct timespec tmp;
int error;

- t = ns_to_timespec(restart_block->nanosleep.expires);
+ t = ns_to_timespec64(restart_block->nanosleep.expires);

error = do_cpu_nanosleep(which_clock, TIMER_ABSTIME, &t, &it);

@@ -1351,10 +1359,11 @@ static long posix_cpu_nsleep_restart(struct restart_block *restart_block)
/*
* Report back to the user the time still remaining.
*/
- if (rmtp && copy_to_user(rmtp, &it.it_value, sizeof *rmtp))
+ tmp = timespec64_to_timespec(it.it_value);
+ if (rmtp && copy_to_user(rmtp, &tmp, sizeof(*rmtp)))
return -EFAULT;

- restart_block->nanosleep.expires = timespec_to_ns(&t);
+ restart_block->nanosleep.expires = timespec64_to_ns(&t);
}
return error;

@@ -1364,12 +1373,12 @@ static long posix_cpu_nsleep_restart(struct restart_block *restart_block)
#define THREAD_CLOCK MAKE_THREAD_CPUCLOCK(0, CPUCLOCK_SCHED)

static int process_cpu_clock_getres(const clockid_t which_clock,
- struct timespec *tp)
+ struct timespec64 *tp)
{
return posix_cpu_clock_getres(PROCESS_CLOCK, tp);
}
static int process_cpu_clock_get(const clockid_t which_clock,
- struct timespec *tp)
+ struct timespec64 *tp)
{
return posix_cpu_clock_get(PROCESS_CLOCK, tp);
}
@@ -1379,7 +1388,7 @@ static int process_cpu_timer_create(struct k_itimer *timer)
return posix_cpu_timer_create(timer);
}
static int process_cpu_nsleep(const clockid_t which_clock, int flags,
- struct timespec *rqtp,
+ struct timespec64 *rqtp,
struct timespec __user *rmtp)
{
return posix_cpu_nsleep(PROCESS_CLOCK, flags, rqtp, rmtp);
@@ -1389,12 +1398,12 @@ static long process_cpu_nsleep_restart(struct restart_block *restart_block)
return -EINVAL;
}
static int thread_cpu_clock_getres(const clockid_t which_clock,
- struct timespec *tp)
+ struct timespec64 *tp)
{
return posix_cpu_clock_getres(THREAD_CLOCK, tp);
}
static int thread_cpu_clock_get(const clockid_t which_clock,
- struct timespec *tp)
+ struct timespec64 *tp)
{
return posix_cpu_clock_get(THREAD_CLOCK, tp);
}
diff --git a/kernel/time/posix-stubs.c b/kernel/time/posix-stubs.c
index cd6716e115e8..c0cd53eb018a 100644
--- a/kernel/time/posix-stubs.c
+++ b/kernel/time/posix-stubs.c
@@ -49,26 +49,32 @@ SYS_NI(alarm);
SYSCALL_DEFINE2(clock_settime, const clockid_t, which_clock,
const struct timespec __user *, tp)
{
+ struct timespec64 new_tp64;
struct timespec new_tp;

if (which_clock != CLOCK_REALTIME)
return -EINVAL;
if (copy_from_user(&new_tp, tp, sizeof (*tp)))
return -EFAULT;
- return do_sys_settimeofday(&new_tp, NULL);
+
+ new_tp64 = timespec_to_timespec64(new_tp);
+ return do_sys_settimeofday64(&new_tp64, NULL);
}

SYSCALL_DEFINE2(clock_gettime, const clockid_t, which_clock,
struct timespec __user *,tp)
{
+ struct timespec64 kernel_tp64;
struct timespec kernel_tp;

switch (which_clock) {
- case CLOCK_REALTIME: ktime_get_real_ts(&kernel_tp); break;
- case CLOCK_MONOTONIC: ktime_get_ts(&kernel_tp); break;
- case CLOCK_BOOTTIME: get_monotonic_boottime(&kernel_tp); break;
+ case CLOCK_REALTIME: ktime_get_real_ts64(&kernel_tp64); break;
+ case CLOCK_MONOTONIC: ktime_get_ts64(&kernel_tp64); break;
+ case CLOCK_BOOTTIME: get_monotonic_boottime64(&kernel_tp64); break;
default: return -EINVAL;
}
+
+ kernel_tp = timespec64_to_timespec(kernel_tp64);
if (copy_to_user(tp, &kernel_tp, sizeof (kernel_tp)))
return -EFAULT;
return 0;
@@ -97,6 +103,7 @@ SYSCALL_DEFINE4(clock_nanosleep, const clockid_t, which_clock, int, flags,
const struct timespec __user *, rqtp,
struct timespec __user *, rmtp)
{
+ struct timespec64 t64;
struct timespec t;

switch (which_clock) {
@@ -105,9 +112,10 @@ SYSCALL_DEFINE4(clock_nanosleep, const clockid_t, which_clock, int, flags,
case CLOCK_BOOTTIME:
if (copy_from_user(&t, rqtp, sizeof (struct timespec)))
return -EFAULT;
- if (!timespec_valid(&t))
+ t64 = timespec_to_timespec64(t);
+ if (!timespec64_valid(&t64))
return -EINVAL;
- return hrtimer_nanosleep(&t, rmtp, flags & TIMER_ABSTIME ?
+ return hrtimer_nanosleep(&t64, rmtp, flags & TIMER_ABSTIME ?
HRTIMER_MODE_ABS : HRTIMER_MODE_REL,
which_clock);
default:
diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index 50a6a47020de..4d7b2ce09c27 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -130,12 +130,12 @@ static struct k_clock posix_clocks[MAX_CLOCKS];
/*
* These ones are defined below.
*/
-static int common_nsleep(const clockid_t, int flags, struct timespec *t,
+static int common_nsleep(const clockid_t, int flags, struct timespec64 *t,
struct timespec __user *rmtp);
static int common_timer_create(struct k_itimer *new_timer);
-static void common_timer_get(struct k_itimer *, struct itimerspec *);
+static void common_timer_get(struct k_itimer *, struct itimerspec64 *);
static int common_timer_set(struct k_itimer *, int,
- struct itimerspec *, struct itimerspec *);
+ struct itimerspec64 *, struct itimerspec64 *);
static int common_timer_del(struct k_itimer *timer);

static enum hrtimer_restart posix_timer_fn(struct hrtimer *data);
@@ -204,17 +204,17 @@ static inline void unlock_timer(struct k_itimer *timr, unsigned long flags)
}

/* Get clock_realtime */
-static int posix_clock_realtime_get(clockid_t which_clock, struct timespec *tp)
+static int posix_clock_realtime_get(clockid_t which_clock, struct timespec64 *tp)
{
- ktime_get_real_ts(tp);
+ ktime_get_real_ts64(tp);
return 0;
}

/* Set clock_realtime */
static int posix_clock_realtime_set(const clockid_t which_clock,
- const struct timespec *tp)
+ const struct timespec64 *tp)
{
- return do_sys_settimeofday(tp, NULL);
+ return do_sys_settimeofday64(tp, NULL);
}

static int posix_clock_realtime_adj(const clockid_t which_clock,
@@ -226,54 +226,54 @@ static int posix_clock_realtime_adj(const clockid_t which_clock,
/*
* Get monotonic time for posix timers
*/
-static int posix_ktime_get_ts(clockid_t which_clock, struct timespec *tp)
+static int posix_ktime_get_ts(clockid_t which_clock, struct timespec64 *tp)
{
- ktime_get_ts(tp);
+ ktime_get_ts64(tp);
return 0;
}

/*
* Get monotonic-raw time for posix timers
*/
-static int posix_get_monotonic_raw(clockid_t which_clock, struct timespec *tp)
+static int posix_get_monotonic_raw(clockid_t which_clock, struct timespec64 *tp)
{
- getrawmonotonic(tp);
+ getrawmonotonic64(tp);
return 0;
}


-static int posix_get_realtime_coarse(clockid_t which_clock, struct timespec *tp)
+static int posix_get_realtime_coarse(clockid_t which_clock, struct timespec64 *tp)
{
- *tp = current_kernel_time();
+ *tp = current_kernel_time64();
return 0;
}

static int posix_get_monotonic_coarse(clockid_t which_clock,
- struct timespec *tp)
+ struct timespec64 *tp)
{
- *tp = get_monotonic_coarse();
+ *tp = get_monotonic_coarse64();
return 0;
}

-static int posix_get_coarse_res(const clockid_t which_clock, struct timespec *tp)
+static int posix_get_coarse_res(const clockid_t which_clock, struct timespec64 *tp)
{
- *tp = ktime_to_timespec(KTIME_LOW_RES);
+ *tp = ktime_to_timespec64(KTIME_LOW_RES);
return 0;
}

-static int posix_get_boottime(const clockid_t which_clock, struct timespec *tp)
+static int posix_get_boottime(const clockid_t which_clock, struct timespec64 *tp)
{
- get_monotonic_boottime(tp);
+ get_monotonic_boottime64(tp);
return 0;
}

-static int posix_get_tai(clockid_t which_clock, struct timespec *tp)
+static int posix_get_tai(clockid_t which_clock, struct timespec64 *tp)
{
- timekeeping_clocktai(tp);
+ timekeeping_clocktai64(tp);
return 0;
}

-static int posix_get_hrtimer_res(clockid_t which_clock, struct timespec *tp)
+static int posix_get_hrtimer_res(clockid_t which_clock, struct timespec64 *tp)
{
tp->tv_sec = 0;
tp->tv_nsec = hrtimer_resolution;
@@ -734,18 +734,18 @@ static struct k_itimer *__lock_timer(timer_t timer_id, unsigned long *flags)
* report.
*/
static void
-common_timer_get(struct k_itimer *timr, struct itimerspec *cur_setting)
+common_timer_get(struct k_itimer *timr, struct itimerspec64 *cur_setting)
{
ktime_t now, remaining, iv;
struct hrtimer *timer = &timr->it.real.timer;

- memset(cur_setting, 0, sizeof(struct itimerspec));
+ memset(cur_setting, 0, sizeof(*cur_setting));

iv = timr->it.real.interval;

/* interval timer ? */
if (iv)
- cur_setting->it_interval = ktime_to_timespec(iv);
+ cur_setting->it_interval = ktime_to_timespec64(iv);
else if (!hrtimer_active(timer) &&
(timr->it_sigev_notify & ~SIGEV_THREAD_ID) != SIGEV_NONE)
return;
@@ -771,13 +771,14 @@ common_timer_get(struct k_itimer *timr, struct itimerspec *cur_setting)
if ((timr->it_sigev_notify & ~SIGEV_THREAD_ID) != SIGEV_NONE)
cur_setting->it_value.tv_nsec = 1;
} else
- cur_setting->it_value = ktime_to_timespec(remaining);
+ cur_setting->it_value = ktime_to_timespec64(remaining);
}

/* Get the time remaining on a POSIX.1b interval timer. */
SYSCALL_DEFINE2(timer_gettime, timer_t, timer_id,
struct itimerspec __user *, setting)
{
+ struct itimerspec64 cur_setting64;
struct itimerspec cur_setting;
struct k_itimer *timr;
struct k_clock *kc;
@@ -792,10 +793,11 @@ SYSCALL_DEFINE2(timer_gettime, timer_t, timer_id,
if (WARN_ON_ONCE(!kc || !kc->timer_get))
ret = -EINVAL;
else
- kc->timer_get(timr, &cur_setting);
+ kc->timer_get(timr, &cur_setting64);

unlock_timer(timr, flags);

+ cur_setting = itimerspec64_to_itimerspec(&cur_setting64);
if (!ret && copy_to_user(setting, &cur_setting, sizeof (cur_setting)))
return -EFAULT;

@@ -831,7 +833,7 @@ SYSCALL_DEFINE1(timer_getoverrun, timer_t, timer_id)
/* timr->it_lock is taken. */
static int
common_timer_set(struct k_itimer *timr, int flags,
- struct itimerspec *new_setting, struct itimerspec *old_setting)
+ struct itimerspec64 *new_setting, struct itimerspec64 *old_setting)
{
struct hrtimer *timer = &timr->it.real.timer;
enum hrtimer_mode mode;
@@ -860,10 +862,10 @@ common_timer_set(struct k_itimer *timr, int flags,
hrtimer_init(&timr->it.real.timer, timr->it_clock, mode);
timr->it.real.timer.function = posix_timer_fn;

- hrtimer_set_expires(timer, timespec_to_ktime(new_setting->it_value));
+ hrtimer_set_expires(timer, timespec64_to_ktime(new_setting->it_value));

/* Convert interval */
- timr->it.real.interval = timespec_to_ktime(new_setting->it_interval);
+ timr->it.real.interval = timespec64_to_ktime(new_setting->it_interval);

/* SIGEV_NONE timers are not queued ! See common_timer_get */
if (((timr->it_sigev_notify & ~SIGEV_THREAD_ID) == SIGEV_NONE)) {
@@ -883,21 +885,23 @@ SYSCALL_DEFINE4(timer_settime, timer_t, timer_id, int, flags,
const struct itimerspec __user *, new_setting,
struct itimerspec __user *, old_setting)
{
- struct k_itimer *timr;
+ struct itimerspec64 new_spec64, old_spec64;
+ struct itimerspec64 *rtn = old_setting ? &old_spec64 : NULL;
struct itimerspec new_spec, old_spec;
- int error = 0;
+ struct k_itimer *timr;
unsigned long flag;
- struct itimerspec *rtn = old_setting ? &old_spec : NULL;
struct k_clock *kc;
+ int error = 0;

if (!new_setting)
return -EINVAL;

if (copy_from_user(&new_spec, new_setting, sizeof (new_spec)))
return -EFAULT;
+ new_spec64 = itimerspec_to_itimerspec64(&new_spec);

- if (!timespec_valid(&new_spec.it_interval) ||
- !timespec_valid(&new_spec.it_value))
+ if (!timespec64_valid(&new_spec64.it_interval) ||
+ !timespec64_valid(&new_spec64.it_value))
return -EINVAL;
retry:
timr = lock_timer(timer_id, &flag);
@@ -908,7 +912,7 @@ SYSCALL_DEFINE4(timer_settime, timer_t, timer_id, int, flags,
if (WARN_ON_ONCE(!kc || !kc->timer_set))
error = -EINVAL;
else
- error = kc->timer_set(timr, flags, &new_spec, rtn);
+ error = kc->timer_set(timr, flags, &new_spec64, rtn);

unlock_timer(timr, flag);
if (error == TIMER_RETRY) {
@@ -916,6 +920,7 @@ SYSCALL_DEFINE4(timer_settime, timer_t, timer_id, int, flags,
goto retry;
}

+ old_spec = itimerspec64_to_itimerspec(&old_spec64);
if (old_setting && !error &&
copy_to_user(old_setting, &old_spec, sizeof (old_spec)))
error = -EFAULT;
@@ -1014,6 +1019,7 @@ SYSCALL_DEFINE2(clock_settime, const clockid_t, which_clock,
const struct timespec __user *, tp)
{
struct k_clock *kc = clockid_to_kclock(which_clock);
+ struct timespec64 new_tp64;
struct timespec new_tp;

if (!kc || !kc->clock_set)
@@ -1021,21 +1027,24 @@ SYSCALL_DEFINE2(clock_settime, const clockid_t, which_clock,

if (copy_from_user(&new_tp, tp, sizeof (*tp)))
return -EFAULT;
+ new_tp64 = timespec_to_timespec64(new_tp);

- return kc->clock_set(which_clock, &new_tp);
+ return kc->clock_set(which_clock, &new_tp64);
}

SYSCALL_DEFINE2(clock_gettime, const clockid_t, which_clock,
struct timespec __user *,tp)
{
struct k_clock *kc = clockid_to_kclock(which_clock);
+ struct timespec64 kernel_tp64;
struct timespec kernel_tp;
int error;

if (!kc)
return -EINVAL;

- error = kc->clock_get(which_clock, &kernel_tp);
+ error = kc->clock_get(which_clock, &kernel_tp64);
+ kernel_tp = timespec64_to_timespec(kernel_tp64);

if (!error && copy_to_user(tp, &kernel_tp, sizeof (kernel_tp)))
error = -EFAULT;
@@ -1070,13 +1079,15 @@ SYSCALL_DEFINE2(clock_getres, const clockid_t, which_clock,
struct timespec __user *, tp)
{
struct k_clock *kc = clockid_to_kclock(which_clock);
+ struct timespec64 rtn_tp64;
struct timespec rtn_tp;
int error;

if (!kc)
return -EINVAL;

- error = kc->clock_getres(which_clock, &rtn_tp);
+ error = kc->clock_getres(which_clock, &rtn_tp64);
+ rtn_tp = timespec64_to_timespec(rtn_tp64);

if (!error && tp && copy_to_user(tp, &rtn_tp, sizeof (rtn_tp)))
error = -EFAULT;
@@ -1088,7 +1099,7 @@ SYSCALL_DEFINE2(clock_getres, const clockid_t, which_clock,
* nanosleep for monotonic and realtime clocks
*/
static int common_nsleep(const clockid_t which_clock, int flags,
- struct timespec *tsave, struct timespec __user *rmtp)
+ struct timespec64 *tsave, struct timespec __user *rmtp)
{
return hrtimer_nanosleep(tsave, rmtp, flags & TIMER_ABSTIME ?
HRTIMER_MODE_ABS : HRTIMER_MODE_REL,
@@ -1100,6 +1111,7 @@ SYSCALL_DEFINE4(clock_nanosleep, const clockid_t, which_clock, int, flags,
struct timespec __user *, rmtp)
{
struct k_clock *kc = clockid_to_kclock(which_clock);
+ struct timespec64 t64;
struct timespec t;

if (!kc)
@@ -1110,10 +1122,11 @@ SYSCALL_DEFINE4(clock_nanosleep, const clockid_t, which_clock, int, flags,
if (copy_from_user(&t, rqtp, sizeof (struct timespec)))
return -EFAULT;

- if (!timespec_valid(&t))
+ t64 = timespec_to_timespec64(t);
+ if (!timespec64_valid(&t64))
return -EINVAL;

- return kc->nsleep(which_clock, flags, &t, rmtp);
+ return kc->nsleep(which_clock, flags, &t64, rmtp);
}

/*
diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index ea6b610c4c57..2d8f05aad442 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -206,6 +206,11 @@ sched_clock_register(u64 (*read)(void), int bits, unsigned long rate)

update_clock_read_data(&rd);

+ if (sched_clock_timer.function != NULL) {
+ /* update timeout for clock wrap */
+ hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL);
+ }
+
r = rate;
if (r >= 4000000) {
r /= 1000000;
diff --git a/kernel/time/time.c b/kernel/time/time.c
index 25bdd2504571..6574bba44b55 100644
--- a/kernel/time/time.c
+++ b/kernel/time/time.c
@@ -193,8 +193,8 @@ int do_sys_settimeofday64(const struct timespec64 *tv, const struct timezone *tz
SYSCALL_DEFINE2(settimeofday, struct timeval __user *, tv,
struct timezone __user *, tz)
{
+ struct timespec64 new_ts;
struct timeval user_tv;
- struct timespec new_ts;
struct timezone new_tz;

if (tv) {
@@ -212,7 +212,7 @@ SYSCALL_DEFINE2(settimeofday, struct timeval __user *, tv,
return -EFAULT;
}

- return do_sys_settimeofday(tv ? &new_ts : NULL, tz ? &new_tz : NULL);
+ return do_sys_settimeofday64(tv ? &new_ts : NULL, tz ? &new_tz : NULL);
}

SYSCALL_DEFINE1(adjtimex, struct timex __user *, txc_p)
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 5b63a2102c29..9652bc57fd09 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -996,8 +996,7 @@ static int adjust_historical_crosststamp(struct system_time_snapshot *history,
return 0;

/* Interpolate shortest distance from beginning or end of history */
- interp_forward = partial_history_cycles > total_history_cycles/2 ?
- true : false;
+ interp_forward = partial_history_cycles > total_history_cycles / 2;
partial_history_cycles = interp_forward ?
total_history_cycles - partial_history_cycles :
partial_history_cycles;
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 1dc0256bfb6e..cc6b6bdd1329 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -241,7 +241,7 @@ int timer_migration_handler(struct ctl_table *table, int write,
int ret;

mutex_lock(&mutex);
- ret = proc_dointvec(table, write, buffer, lenp, ppos);
+ ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
if (!ret && write)
timers_update_migration(false);
mutex_unlock(&mutex);
diff --git a/kernel/time/timer_list.c b/kernel/time/timer_list.c
index ff8d5c13d04b..0e7f5428a148 100644
--- a/kernel/time/timer_list.c
+++ b/kernel/time/timer_list.c
@@ -16,6 +16,7 @@
#include <linux/sched.h>
#include <linux/seq_file.h>
#include <linux/kallsyms.h>
+#include <linux/nmi.h>

#include <linux/uaccess.h>

@@ -86,6 +87,9 @@ print_active_timers(struct seq_file *m, struct hrtimer_clock_base *base,

next_one:
i = 0;
+
+ touch_nmi_watchdog();
+
raw_spin_lock_irqsave(&base->cpu_base->lock, flags);

curr = timerqueue_getnext(&base->active);
@@ -197,6 +201,8 @@ print_tickdevice(struct seq_file *m, struct tick_device *td, int cpu)
{
struct clock_event_device *dev = td->evtdev;

+ touch_nmi_watchdog();
+
SEQ_printf(m, "Tick Device: mode: %d\n", td->mode);
if (cpu < 0)
SEQ_printf(m, "Broadcast device\n");