Commit 9a96263a authored by Jeffrey Lee

Make MMU_Changing ARMops perform the sub-operations in a sensible order

Detail:
  For a while we've known that the correct way of doing cache maintenance on ARMv6+ (e.g. when converting a page from cacheable to non-cacheable) is as follows:
  1. Write new page table entry
  2. Flush old entry from TLB
  3. Clean cache + drain write buffer
  The MMU_Changing ARMops (e.g. MMU_ChangingEntry) implement the last two items, but in the wrong order. This has caused the operations to fall out of favour and cease to be used, even in pre-ARMv6 code paths where the effects of improper cache/TLB management perhaps weren't as readily visible.
  This change re-specifies the relevant ARMops so that they perform their sub-operations in the correct order to make them useful on modern ARMs, updates the implementations, and updates the kernel to make use of the ops wherever relevant.
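  A minimal sketch of that sequence for a single 4k page on an ARMv6+ core, using the barrier/maintenance macros from s/ARMops (register usage is illustrative only, and the ranged cache clean is reduced to a comment):
      STR     r6, [r1]                ; 1) write the new page table entry
      DSB                             ;    ensure the page table write has completed
      ISB
      TLBIMVA a1                      ; 2) invalidate the stale DTLB/ITLB entry (a1 = page address)
      DSB                             ;    wait for the TLB invalidation to complete
      ISB
      ; 3) DCCIMVAC + ICIMVAU over the 4k range (as in MMU_ChangingEntry_WB_CR7_Lx),
      ;    then BPIALL + DSB + ISB to flush branch predictors and drain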
  File changes:
  - Docs/HAL/ARMop_API - Re-specify all the MMU_Changing ARMops to state that they are for use just after a page table entry has been changed (as opposed to before - e.g. 5.00 kernel behaviour). Re-specify the cacheable ones to state that the TLB invalidation comes first.
  - s/ARM600, s/ChangeDyn, s/HAL, s/MemInfo, s/VMSAv6, s/AMBControl/memmap - Replace MMU_ChangingUncached + Cache_CleanInvalidate pairs with the equivalent MMU_Changing op (see the sketch after this list)
  - s/ARMops - Update ARMop implementations to do everything in the correct order
  - s/MemMap2 - Update ARMop usage, and get rid of some lingering sledgehammer logic from ShuffleDoublyMappedRegionForGrow
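  As a sketch of that replacement (registers as in the AMBControl changes below), code which previously performed the maintenance in two halves:
      ADD     r0,r1,#ApplicationStart
      ARMop   MMU_ChangingUncachedEntry,,,r2   ; flush TLB
      ADD     r0,r1,#ApplicationStart
      ADD     r1,r1,#ApplicationStart+PageSize
      ARMop   Cache_CleanInvalidateRange,,,r2  ; flush from cache
  now simply issues the combined op, which does the TLB invalidation first and then the cache clean+invalidate and write buffer drain:
      ADD     r0,r1,#ApplicationStart
      ARMop   MMU_ChangingEntry,,,r2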
Admin:
  Tested on pretty much everything currently supported


Version 5.70. Tagged as 'Kernel-5_70'
parent 2704c756
......@@ -569,40 +569,46 @@ MMU mapping ARMops
-- MMU_Changing
The global MMU mapping is about to be changed.
The global MMU mapping has just changed.
entry: -
exit: -
The operation must typically perform the following:
1) globally clean and invalidate all caches
2) drain write buffer
3) globally invalidate TLB or TLBs
1) globally invalidate TLB or TLBs
2) globally clean and invalidate all caches
3) drain write buffer
Note that it should not be necessary to disable IRQs. The OS ensures that
remappings do not affect currently active interrupts.
This operation should typically be used when a large number of cacheable pages
have had their attributes changed in a way which will affect cache behaviour.
-- MMU_ChangingEntry
The MMU mapping is about to be changed for a single page entry (4k).
The MMU mapping has just changed for a single page entry (4k).
entry: r0 = logical address of entry (page aligned)
exit: -
The operation must typically perform the following:
1) clean and invalidate all caches over the 4k range of the page
2) drain write buffer
3) invalidate TLB or TLBs for the entry
1) invalidate TLB or TLBs for the entry
2) clean and invalidate all caches over the 4k range of the page
3) drain write buffer
Note that it should not be necessary to disable IRQs. The OS ensures that
remappings do not affect currently active interrupts.
This operation should typically be used when a cacheable page has had its
attributes changed in a way which will affect cache behaviour.
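A minimal usage sketch (register choice is illustrative; the ARMop macro form matches the kernel callers updated alongside this specification):
    STR     r6, [r5]                    ; write the new L2PT entry for the page
    LDR     r3, =ZeroPage
    MOV     r0, r4                      ; r0 = logical address of the page (page aligned)
    ARMop   MMU_ChangingEntry,,,r3      ; TLB entry invalidate, cache clean+invalidate, drain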
-- MMU_ChangingUncached
The MMU mapping is about to be changed in a way that globally affects
uncacheable space.
The MMU mapping has just changed in a way that globally affects uncacheable
space.
entry: -
exit: -
......@@ -614,8 +620,7 @@ that operate in uncacheable space on some ARMs.
-- MMU_ChangingUncachedEntry
The MMU mapping is about to be changed for a single uncacheable page entry
(4k).
The MMU mapping has just changed for a single uncacheable page entry (4k).
entry: r0 = logical address of entry (page aligned)
exit: -
......@@ -628,8 +633,8 @@ buffers that operate in uncacheable space on some ARMs.
-- MMU_ChangingEntries
The MMU mapping is about to be changed for a contiguous range of page
entries (multiple of 4k).
The MMU mapping has just changed for a contiguous range of page entries
(multiple of 4k).
entry: r0 = logical address of first page entry (page aligned)
r1 = number of page entries ( >= 1)
......@@ -637,9 +642,9 @@ entries (multiple of 4k).
The operation must typically perform the following:
1) clean and invalidate all caches over the range of the pages
2) drain write buffer
3) invalidate TLB or TLBs over the range of the entries
1) invalidate TLB or TLBs over the range of the entries
2) clean and invalidate all caches over the range of the pages
3) drain write buffer
Note that it should not be necessary to disable IRQs. The OS ensures that
remappings do not affect currently active interrupts.
......@@ -648,10 +653,13 @@ Note that the number of entries may be large. The operation is typically
expected to use a reasonable threshold, above which it performs a global
operation instead for speed reasons.
This operation should typically be used when cacheable pages have had their
attributes changed in a way which will affect cache behaviour.
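For example, the writethrough implementation in s/ARMops guards its per-entry loop like this (sketch; other implementations compare against DCache_RangeThreshold instead):
    CMP     a2, #16                     ; arbitrary-ish threshold
    BHS     MMU_Changing_Writethrough   ; many entries: cheaper to do the global op
    ; otherwise invalidate the TLB entry for each page, then clean+invalidate
    ; the cache over the range and drain the write buffer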
-- MMU_ChangingUncachedEntries
The MMU mapping is about to be changed for a contiguous range of uncacheable
page entries (multiple of 4k).
The MMU mapping has just changed for a contiguous range of uncacheable page
entries (multiple of 4k).
entry: r0 = logical address of first page entry (page aligned)
r1 = number of page entries ( >= 1)
......
......@@ -11,13 +11,13 @@
GBLS Module_HelpVersion
GBLS Module_ComponentName
GBLS Module_ComponentPath
Module_MajorVersion SETS "5.69"
Module_Version SETA 569
Module_MajorVersion SETS "5.70"
Module_Version SETA 570
Module_MinorVersion SETS ""
Module_Date SETS "13 Dec 2016"
Module_ApplicationDate SETS "13-Dec-16"
Module_ComponentName SETS "Kernel"
Module_ComponentPath SETS "castle/RiscOS/Sources/Kernel"
Module_FullVersion SETS "5.69"
Module_HelpVersion SETS "5.69 (13 Dec 2016)"
Module_FullVersion SETS "5.70"
Module_HelpVersion SETS "5.70 (13 Dec 2016)"
END
/* (5.69)
/* (5.70)
*
* This file is automatically maintained by srccommit, do not edit manually.
* Last processed by srccommit version: 1.1.
*
*/
#define Module_MajorVersion_CMHG 5.69
#define Module_MajorVersion_CMHG 5.70
#define Module_MinorVersion_CMHG
#define Module_Date_CMHG 13 Dec 2016
#define Module_MajorVersion "5.69"
#define Module_Version 569
#define Module_MajorVersion "5.70"
#define Module_Version 570
#define Module_MinorVersion ""
#define Module_Date "13 Dec 2016"
......@@ -18,6 +18,6 @@
#define Module_ComponentName "Kernel"
#define Module_ComponentPath "castle/RiscOS/Sources/Kernel"
#define Module_FullVersion "5.69"
#define Module_HelpVersion "5.69 (13 Dec 2016)"
#define Module_LibraryVersionInfo "5:69"
#define Module_FullVersion "5.70"
#define Module_HelpVersion "5.70 (13 Dec 2016)"
#define Module_LibraryVersionInfo "5:70"
......@@ -237,10 +237,7 @@ AMB_SetMemMapEntries_MapOut_Lazy ROUT
; Do cache/TLB maintenance
MOV r1,r4,LSL #Log2PageSize
ADD r0,r1,#ApplicationStart
ARMop MMU_ChangingUncachedEntry,,,r2 ;flush TLB
ADD r0,r1,#ApplicationStart
ADD r1,r1,#ApplicationStart+PageSize
ARMop Cache_CleanInvalidateRange,,,r2 ;flush from cache
ARMop MMU_ChangingEntry,,,r2
33
SUBS r3,r3,#PageSize
ADDEQ r5,r4,#1 ;mapped out all pages in slot; set end to current+1
......@@ -270,8 +267,7 @@ AMB_SetMemMapEntries_MapOut_Lazy ROUT
; Do global maintenance if required
CMP r6,#0
BLT %FT41
ARMop MMU_ChangingUncached,,,r2
ARMop Cache_CleanInvalidateAll,,,r2
ARMop MMU_Changing,,,r2
41
;
......@@ -843,11 +839,7 @@ AMB_movecacheablepagesout_L2PT
FRAMLDR r0,,r4 ;address of 1st page
FRAMLDR r1,,r8 ;number of pages
LDR r3,=ZeroPage
ARMop MMU_ChangingUncachedEntries,,,r3 ;flush TLB
FRAMLDR r0,,r4
FRAMLDR r1,,r8
ADD r1,r0,r1,LSL #Log2PageSize
ARMop Cache_CleanInvalidateRange,,,r3 ;flush from cache
ARMop MMU_ChangingEntries,,,r3
FRAMLDR r4
FRAMLDR r8
B %FT55 ; -> moveuncacheablepagesout_L2PT (avoid pop+push of large stack frame)
......
......@@ -192,16 +192,10 @@ BangL2PT ; internal entry point used only
BEQ %FT19
STR lr, [r1, r9, LSR #10] ;Update 2nd mapping too if required
ADD r0, r3, r9
ARMop MMU_ChangingUncachedEntry,,, r4 ; TLB flush
ADD r0, r3, r9
ADD r1, r0, #4096
ARMop Cache_CleanInvalidateRange,,, r4 ; Cache flush
ARMop MMU_ChangingEntry,,, r4
19
MOV r0, r3
ARMop MMU_ChangingUncachedEntry,,, r4 ; TLB flush
MOV r0, r3
ADD r1, r3, #4096
ARMop Cache_CleanInvalidateRange,,, r4 ; Cache flush
ARMop MMU_ChangingEntry,,, r4
LDR r1, =L2PT
20 STR r6, [r1, r3, LSR #10]! ;update L2PT entry
......
......@@ -1232,8 +1232,8 @@ TLB_InvalidateEntry_ARMv3
MOV pc, lr
MMU_Changing_ARMv3
MCR p15, 0, a1, c7, c0 ; invalidate cache
MCR p15, 0, a1, c5, c0 ; invalidate TLB
MCR p15, 0, a1, c7, c0 ; invalidate cache
MOV pc, lr
MMU_ChangingUncached_ARMv3
......@@ -1243,8 +1243,8 @@ MMU_ChangingUncached_ARMv3
; a1 = page affected (page aligned address)
;
MMU_ChangingEntry_ARMv3
MCR p15, 0, a1, c7, c0 ; invalidate cache
MCR p15, 0, a1, c6, c0 ; invalidate TLB entry
MCR p15, 0, a1, c7, c0 ; invalidate cache
MOV pc, lr
; a1 = first page affected (page aligned address)
......@@ -1254,12 +1254,12 @@ MMU_ChangingEntries_ARMv3 ROUT
CMP a2, #16 ; arbitrary-ish threshold
BHS MMU_Changing_ARMv3
Push "a2"
MCR p15, 0, a1, c7, c0 ; invalidate cache
10
MCR p15, 0, a1, c6, c0 ; invalidate TLB entry
SUBS a2, a2, #1 ; next page
ADD a1, a1, #PageSize
BNE %BT10
MCR p15, 0, a1, c7, c0 ; invalidate cache
Pull "a2"
MOV pc, lr
......@@ -1332,8 +1332,8 @@ TLB_InvalidateEntry_Unified
MMU_Changing_Writethrough
MOV a1, #0
MCR p15, 0, a1, c7, c7 ; invalidate cache
MCR p15, 0, a1, c8, c7 ; invalidate TLB
MCR p15, 0, a1, c7, c7 ; invalidate cache
MOV pc, lr
MMU_ChangingUncached
......@@ -1344,11 +1344,9 @@ MMU_ChangingUncached
; a1 = page affected (page aligned address)
;
MMU_ChangingEntry_Writethrough
Push "a4"
MOV a4, #0
MCR p15, 0, a4, c7, c7 ; invalidate cache
MCR p15, 0, a1, c8, c7, 1 ; invalidate TLB entry
Pull "a4"
MOV a1, #0
MCR p15, 0, a1, c7, c7 ; invalidate cache
MOV pc, lr
; a1 = first page affected (page aligned address)
......@@ -1357,15 +1355,14 @@ MMU_ChangingEntry_Writethrough
MMU_ChangingEntries_Writethrough ROUT
CMP a2, #16 ; arbitrary-ish threshold
BHS MMU_Changing_Writethrough
Push "a2,a4"
MOV a4, #0
MCR p15, 0, a4, c7, c7 ; invalidate cache
Push "a2"
10
MCR p15, 0, a1, c8, c7, 1 ; invalidate TLB entry
SUBS a2, a2, #1 ; next page
ADD a1, a1, #PageSize
BNE %BT10
Pull "a2,a4"
MCR p15, 0, a2, c7, c7 ; invalidate cache
Pull "a2"
MOV pc, lr
; a1 = page affected (page aligned address)
......@@ -1666,16 +1663,17 @@ IMB_List_WB_CR7_LDa ROUT
Pull "v1-v2,pc"
MMU_Changing_WB_CR7_LDa ROUT
Push "lr"
BL Cache_CleanInvalidateAll_WB_CR7_LDa
MOV a1, #0
MCR p15, 0, a1, c8, c7, 0 ; invalidate ITLB and DTLB
Pull "pc"
B Cache_CleanInvalidateAll_WB_CR7_LDa
; a1 = page affected (page aligned address)
;
MMU_ChangingEntry_WB_CR7_LDa ROUT
[ MEMM_Type = "ARM600"
Push "a2, lr"
MCR p15, 0, a1, c8, c6, 1 ; invalidate DTLB entry
MCR p15, 0, a1, c8, c5, 1 ; invalidate ITLB entry
ADD a2, a1, #PageSize
LDR lr, =ZeroPage
LDRB lr, [lr, #DCache_LineLen]
......@@ -1688,10 +1686,13 @@ MMU_ChangingEntry_WB_CR7_LDa ROUT
MOV lr, #0
MCR p15, 0, lr, c7, c10, 4 ; drain WBuffer
MCR p15, 0, a1, c7, c5, 6 ; flush branch predictors
SUB a1, a1, #PageSize
Pull "a2, pc"
|
; See above re: ARM11 cache cleaning not working on non-cacheable pages
MCR p15, 0, a1, c8, c6, 1 ; invalidate DTLB entry
MCR p15, 0, a1, c8, c5, 1 ; invalidate ITLB entry
Pull "a2, pc"
B Cache_CleanInvalidateAll_WB_CR7_LDa
]
; a1 = first page affected (page aligned address)
; a2 = number of pages
......@@ -1707,27 +1708,33 @@ MMU_ChangingEntries_WB_CR7_LDa ROUT
LDRB a3, [lr, #DCache_LineLen]
MOV lr, a1
10
MCR p15, 0, a1, c8, c6, 1 ; invalidate DTLB entry
MCR p15, 0, a1, c8, c5, 1 ; invalidate ITLB entry
ADD a1, a1, #PageSize
CMP a1, a2
BLO %BT10
[ MEMM_Type = "ARM600"
MOV a1, lr ; restore start address
20
MCR p15, 0, a1, c7, c14, 1 ; clean&invalidate DCache entry
MCR p15, 0, a1, c7, c5, 1 ; invalidate ICache entry
ADD a1, a1, a3
CMP a1, a2
BLO %BT10
BLO %BT20
MOV a1, #0
MCR p15, 0, a1, c7, c10, 4 ; drain WBuffer
MCR p15, 0, a1, c7, c5, 6 ; flush branch predictors
MOV a1, lr ; restore start address
20
MCR p15, 0, a1, c8, c6, 1 ; invalidate DTLB entry
MCR p15, 0, a1, c8, c5, 1 ; invalidate ITLB entry
ADD a1, a1, #PageSize
CMP a1, a2
BLO %BT20
Pull "a2, a3, pc"
;
|
; See above re: ARM11 cache cleaning not working on non-cacheable pages
B %FT40
]
30
BL Cache_CleanInvalidateAll_WB_CR7_LDa
MOV a1, #0
MCR p15, 0, a1, c8, c7, 0 ; invalidate ITLB and DTLB
40
BL Cache_CleanInvalidateAll_WB_CR7_LDa
Pull "a2, a3, pc"
; a1 = first page affected (page aligned address)
......@@ -1898,9 +1905,9 @@ IMB_List_WB_Crd ROUT
MMU_Changing_WB_Crd
Push "lr"
MCR p15, 0, a1, c8, c7, 0 ;flush ITLB and DTLB
BL Cache_CleanAll_WB_Crd ;clean DCache (wrt to non-interrupt stuff)
MCR p15, 0, a1, c7, c5, 0 ;flush ICache
MCR p15, 0, a1, c8, c7, 0 ;flush ITLB and DTLB
Pull "pc"
MMU_ChangingEntry_WB_Crd ROUT
......@@ -1914,6 +1921,8 @@ MMU_ChangingEntry_WB_Crd ROUT
ADD a2, a1, #PageSize
LDR lr, =ZeroPage
LDRB lr, [lr, #DCache_LineLen]
MCR p15, 0, a1, c8, c6, 1 ;flush DTLB entry
MCR p15, 0, a1, c8, c5, 0 ;flush ITLB
10
MCR p15, 0, a1, c7, c10, 1 ;clean DCache entry
MCR p15, 0, a1, c7, c6, 1 ;flush DCache entry
......@@ -1923,8 +1932,6 @@ MMU_ChangingEntry_WB_Crd ROUT
SUB a1, a1, #PageSize
MCR p15, 0, a1, c7, c10, 4 ;drain WBuffer
MCR p15, 0, a1, c7, c5, 0 ;flush ICache
MCR p15, 0, a1, c8, c6, 1 ;flush DTLB entry
MCR p15, 0, a1, c8, c5, 0 ;flush ITLB
Pull "a2, pc"
MMU_ChangingEntries_WB_Crd ROUT
......@@ -1941,26 +1948,26 @@ MMU_ChangingEntries_WB_Crd ROUT
LDRB a3, [lr, #DCache_LineLen]
MOV lr, a1
10
MCR p15, 0, a1, c7, c10, 1 ;clean DCache entry
MCR p15, 0, a1, c7, c6, 1 ;flush DCache entry
ADD a1, a1, a3
MCR p15, 0, a1, c8, c6, 1 ;flush DTLB entry
ADD a1, a1, #PageSize
CMP a1, a2
BLO %BT10
MCR p15, 0, a1, c7, c10, 4 ;drain WBuffer
MCR p15, 0, a1, c7, c5, 0 ;flush ICache
MCR p15, 0, a1, c8, c5, 0 ;flush ITLB
MOV a1, lr ;restore start address
20
MCR p15, 0, a1, c8, c6, 1 ;flush DTLB entry
ADD a1, a1, #PageSize
MCR p15, 0, a1, c7, c10, 1 ;clean DCache entry
MCR p15, 0, a1, c7, c6, 1 ;flush DCache entry
ADD a1, a1, a3
CMP a1, a2
BLO %BT20
MCR p15, 0, a1, c8, c5, 0 ;flush ITLB
MCR p15, 0, a1, c7, c10, 4 ;drain WBuffer
MCR p15, 0, a1, c7, c5, 0 ;flush ICache
Pull "a2, a3, pc"
;
30
MCR p15, 0, a1, c8, c7, 0 ;flush ITLB and DTLB
BL Cache_CleanAll_WB_Crd ;clean DCache (wrt to non-interrupt stuff)
MCR p15, 0, a1, c7, c5, 0 ;flush ICache
MCR p15, 0, a1, c8, c7, 0 ;flush ITLB and DTLB
Pull "a2, a3, pc"
Cache_CleanRange_WB_Crd ROUT
......@@ -2297,9 +2304,9 @@ IMB_List_WB_Cal_LD ROUT
MMU_Changing_WB_Cal_LD ROUT
Push "lr"
MCR p15, 0, a1, c8, c7, 0 ; invalidate ITLB and DTLB
BL Cache_CleanAll_WB_Cal_LD
MCR p15, 0, a1, c7, c5, 0 ; invalidate ICache and BTB
MCR p15, 0, a1, c8, c7, 0 ; invalidate ITLB and DTLB
CPWAIT
Pull "pc"
......@@ -2314,6 +2321,8 @@ MMU_ChangingEntry_WB_Cal_LD ROUT
ADD a2, a1, #PageSize
LDR lr, =ZeroPage
LDRB lr, [lr, #DCache_LineLen]
MCR p15, 0, a1, c8, c6, 1 ; invalidate DTLB entry
MCR p15, 0, a1, c8, c5, 1 ; invalidate ITLB entry
10
MCR p15, 0, a1, c7, c10, 1 ; clean DCache entry
MCR p15, 0, a1, c7, c6, 1 ; invalidate DCache entry
......@@ -2329,9 +2338,6 @@ MMU_ChangingEntry_WB_Cal_LD ROUT
|
MCR p15, 0, a1, c7, c5, 6 ; invalidate BTB
]
SUB a1, a1, #PageSize
MCR p15, 0, a1, c8, c6, 1 ; invalidate DTLB entry
MCR p15, 0, a1, c8, c5, 1 ; invalidate ITLB entry
CPWAIT
Pull "a2, pc"
......@@ -2350,6 +2356,13 @@ MMU_ChangingEntries_WB_Cal_LD ROUT
LDRB a3, [lr, #DCache_LineLen]
MOV lr, a1
10
MCR p15, 0, a1, c8, c6, 1 ; invalidate DTLB entry
MCR p15, 0, a1, c8, c5, 1 ; invalidate ITLB entry
ADD a1, a1, #PageSize
CMP a1, a2
BLO %BT10
MOV a1, lr ; restore start address
20
MCR p15, 0, a1, c7, c10, 1 ; clean DCache entry
MCR p15, 0, a1, c7, c6, 1 ; invalidate DCache entry
[ :LNOT:XScaleJTAGDebug
......@@ -2357,26 +2370,19 @@ MMU_ChangingEntries_WB_Cal_LD ROUT
]
ADD a1, a1, a3
CMP a1, a2
BLO %BT10
BLO %BT20
MCR p15, 0, a1, c7, c10, 4 ; drain WBuffer
[ XScaleJTAGDebug
MCR p15, 0, a1, c7, c5, 0 ; invalidate ICache and BTB
|
MCR p15, 0, a1, c7, c5, 6 ; invalidate BTB
]
MOV a1, lr ; restore start address
20
MCR p15, 0, a1, c8, c6, 1 ; invalidate DTLB entry
MCR p15, 0, a1, c8, c5, 1 ; invalidate ITLB entry
ADD a1, a1, #PageSize
CMP a1, a2
BLO %BT20
CPWAIT
Pull "a2, a3, pc"
;
30
BL Cache_CleanInvalidateAll_WB_Cal_LD
MCR p15, 0, a1, c8, c7, 0 ; invalidate ITLB and DTLB
BL Cache_CleanInvalidateAll_WB_Cal_LD
CPWAIT
Pull "a2, a3, pc"
......@@ -2843,21 +2849,22 @@ IMB_List_WB_CR7_Lx ROUT
Pull "a3,v1-v2,pc"
MMU_Changing_WB_CR7_Lx ROUT
Push "lr"
DSB ; Ensure the page table write has actually completed
ISB ; Also required
BL Cache_CleanInvalidateAll_WB_CR7_Lx
DSB ; Ensure the page table write has actually completed
ISB ; Also required
TLBIALL ; invalidate ITLB and DTLB
DSB ; Wait for TLB invalidation to complete
ISB ; Ensure that the effects are visible
Pull "pc"
B Cache_CleanInvalidateAll_WB_CR7_Lx
; a1 = page affected (page aligned address)
;
MMU_ChangingEntry_WB_CR7_Lx ROUT
Push "a2, lr"
DSB ; Ensure the page table write has actually completed
ISB ; Also required
DSB ; Ensure the page table write has actually completed
ISB ; Also required
TLBIMVA a1 ; invalidate DTLB and ITLB
DSB ; Wait for TLB invalidation to complete
ISB ; Ensure that the effects are visible
LDR lr, =ZeroPage
LDRB lr, [lr, #DCache_LineLen] ; log2(line len)-2
MOV a2, #4
......@@ -2879,8 +2886,6 @@ MMU_ChangingEntry_WB_CR7_Lx ROUT
ADD a1, a1, lr
CMP a1, a2
BNE %BT10
SUB a1, a1, #PageSize
TLBIMVA a1 ; invalidate DTLB and ITLB
BPIALL ; invalidate branch predictors
DSB
ISB
......@@ -2897,43 +2902,46 @@ MMU_ChangingEntries_WB_CR7_Lx ROUT
LDR lr, =ZeroPage
LDR a3, [lr, #DCache_RangeThreshold] ;check whether cheaper to do global clean
CMP a2, a3
BHS %FT30
BHS %FT90
ADD a2, a2, a1 ;clean end address (exclusive)
LDRB a3, [lr, #DCache_LineLen] ; log2(line len)-2
MOV lr, #4
MOV a3, lr, LSL a3
MOV lr, a1
10
TLBIMVA a1 ; invalidate DTLB & ITLB entry
ADD a1, a1, #PageSize
CMP a1, a2
BNE %BT10
DSB
ISB
MOV a1, lr ; Get start address back
20
DCCIMVAC a1 ; clean&invalidate DCache entry to PoC
ADD a1, a1, a3
CMP a1, a2
BNE %BT10
BNE %BT20
DSB ; Wait for clean to complete
LDR a3, =ZeroPage
LDRB a3, [a3, #ICache_LineLen] ; Use ICache line length, just in case D&I length differ
MOV a1, #4
MOV a3, a1, LSL a3
MOV a1, lr ; Get start address back
10
MOV a1, lr ; Get start address back
30
ICIMVAU a1 ; invalidate ICache entry to PoU
ADD a1, a1, a3
CMP a1, a2
BNE %BT10
20
TLBIMVA lr ; invalidate DTLB & ITLB entry
ADD lr, lr, #PageSize
CMP lr, a2
BNE %BT20
BNE %BT30
BPIALL ; invalidate branch predictors
DSB
ISB
Pull "a2, a3, pc"
;
30
BL Cache_CleanInvalidateAll_WB_CR7_Lx
90
TLBIALL ; invalidate ITLB and DTLB
DSB ; Wait for TLB invalidation to complete
ISB ; Ensure that the effects are visible
BL Cache_CleanInvalidateAll_WB_CR7_Lx
Pull "a2, a3, pc"
; a1 = start address (inclusive, cache line aligned)
......@@ -3286,41 +3294,36 @@ DMB_Write_PL310 ROUT
EXIT
MMU_Changing_PL310 ROUT
Entry
DSB ; Ensure the page table write has actually completed
ISB ; Also required
BL Cache_CleanInvalidateAll_PL310
TLBIALL ; invalidate ITLB and DTLB
DSB ; Wait for TLB invalidation to complete
ISB ; Ensure that the effects are visible
EXIT
B Cache_CleanInvalidateAll_PL310
; a1 = virtual address of page affected (page aligned address)
;
MMU_ChangingEntry_PL310 ROUT
Push "a1-a3,lr"
; Keep this one simple by just calling through to MMU_ChangingEntries
MOV a3, #1
Push "a1-a2,lr"
; Do the TLB maintenance
BL MMU_ChangingUncachedEntry_WB_CR7_Lx
; Keep the rest simple by just calling through to MMU_ChangingEntries
MOV a2, #1
B %FT10
; a1 = virtual address of first page affected (page aligned address)
; a2 = number of pages
;
MMU_ChangingEntries_PL310
Push "a1-a3,lr"
MOV a3, a2
Push "a1-a2,lr"
; Do the TLB maintenance
BL MMU_ChangingUncachedEntries_WB_CR7_Lx
10 ; Arrive here from MMU_ChangingEntry_PL310
DSB ; Ensure the page table write has actually completed
ISB ; Also required
LDR a1, [sp]
; Do PL310 clean & invalidate
ADD a2, a1, a3, LSL #Log2PageSize
ADD a2, a1, a2, LSL #Log2PageSize