Commit afb010f2 authored by Jeffrey Lee


Improve support for VMSAv6 cache policies & memory types. Expose raw ARMops via OS_MMUControl & cache information via OS_PlatformFeatures.

Detail:
  Docs/HAL/ARMop_API - Document two new ARMops: Cache_Examine and IMB_List
  hdr/KernelWS - Shuffle workspace round a bit to allow space for the two new ARMops. IOSystemType now deleted (has been deprecated and fixed at 0 for some time)
  s/ARM600 - Cosmetic changes to BangCam to make it clearer what's going on. Add OS_MMUControl 2 (get ARMop) implementation.
  s/ARMops - Switch out different ARMop implementations and XCB tables depending on MMU model - helps reduce assembler warnings and make it clearer what code paths are and aren't possible. Add implementations of the two new ARMops. Simplify ARM_Analyse_Fancy by removing some tests which we know will have certain results. Use CCSIDR constants in ARMv7 ARMops instead of magic numbers. Update XCB table comments, and add a new table for VMSAv6
  s/ChangeDyn - Define constant for the new NCB 'idempotent' cache policy (VMSAv6 normal, non-cacheable memory)
  s/HAL - Use CCSIDR constants instead of magic numbers. Extend RISCOS_MapInIO to allow the TEX bits to be specified.
  s/Kernel - OS_PlatformFeatures 33 (read cache information) implementation (actually, just calls through to an ARMop)
  s/MemInfo - Modify VMSAv6 OS_Memory 0 cache/uncache implementation to use the XCB table instead of modifying L2_C directly. This allows the cacheability to be changed without affecting the memory type - important for e.g. unaligned accesses to work correctly. Implement cache policy support for OS_Memory 13.
  s/Middle - Remove IOSystemType from OS_ReadSysInfo 6.
  s/VMSAv6 - Make sure BangCam uses the XCB table for working out the attributes of temp-uncacheable pages instead of manipulating L2_C directly. Add OS_MMUControl 2 implementation.
  s/AMBControl/memmap - Update VMSAv6 page table poking to use the XCB table
  s/PMF/osinit - Remove IOSystemType reference, and switch out some pre-HAL code that was trying to use IOSystemType.
Admin:
  Tested on Iyonix, ARM11, Cortex-A7, -A8, -A9, -A15
  Note that, contrary to the comments in the source, the default NCB policy currently maps to the VMSAv6 Device memory type (as per previous kernel versions). This is just a temporary measure; it will be switched over to Normal, non-cacheable once appropriate memory barriers have been added to the affected IO code.
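
As a minimal illustration of the barrier issue (not code from this commit): with the Device memory type, accesses to an IO region stay in program order, but once NCB maps to Normal, non-cacheable, the core may reorder or merge them. The device addresses and register layout below are hypothetical, and an ARMv7 target with a GCC-style compiler is assumed.

  #include <stdint.h>

  #define DEV_FIFO   (*(volatile uint32_t *)0x88000000) /* hypothetical */
  #define DEV_STATUS (*(volatile uint32_t *)0x88000004) /* hypothetical */

  static inline void dmb(void)
  {
      __asm__ volatile ("dmb" ::: "memory"); /* ARMv7 data memory barrier */
  }

  void dev_send(uint32_t w)
  {
      DEV_FIFO = w;
      dmb();                     /* keep the write ahead of the status poll;
                                    Normal memory no longer guarantees it */
      while ((DEV_STATUS & 1) == 0)
          ;                      /* wait for the device to accept the word */
  }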


Version 5.35, 4.79.2.273. Tagged as 'Kernel-5_35-4_79_2_273'
parent fc4cbde0
@@ -249,6 +249,29 @@ the parameter more directly).
The exact value is unlikely to be critical, but a sensible value may depend
on both the ARM and external factors such as memory bus speed.
-- Cache_Examine
Return information about a given cache level
entry: r1 = cache level (0-based)
exit: r0 = Flags
bits 0-2: cache type:
000 -> none
001 -> instruction
010 -> data
011 -> split
100 -> unified
101-111 -> reserved
Other bits: reserved
r1 = D line length
r2 = D size
r3 = I line length
r4 = I size
r0-r4 = zero if cache level not present
For unified caches, r1-r2 will match r3-r4. This call mainly exists for the
benefit of OS_PlatformFeatures 33.
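
As a rough illustration, a user-side C sketch of the OS_PlatformFeatures 33 call that this ARMop backs (assuming a RISC OS C environment where "swis.h" supplies _swix and the OS_PlatformFeatures SWI number):

  #include <stdio.h>
  #include "swis.h"

  int main(void)
  {
      for (int level = 0; ; level++)
      {
          int flags, d_line, d_size, i_line, i_size;
          if (_swix(OS_PlatformFeatures, _INR(0,1)|_OUTR(0,4),
                    33, level,
                    &flags, &d_line, &d_size, &i_line, &i_size) != NULL)
              break;                /* error: reason code not supported */
          if ((flags & 7) == 0)
              break;                /* cache type 0 => no such level */
          printf("L%d: type %d, D %d/%d, I %d/%d\n",
                 level + 1, flags & 7, d_line, d_size, i_line, i_size);
      }
      return 0;
  }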
-- WriteBuffer_Drain
@@ -337,6 +360,24 @@ typically expected to use a threshold (related to Cache_RangeThreshold) to
decide when to perform IMB_Full instead, being faster for large ranges.
-- IMB_List
A variant of IMB_Range that accepts a list of address ranges.
entry: r0 = pointer to word-aligned list of (start, end) address pairs
r1 = pointer to end of list (past last valid entry)
r2 = total amount of memory to be synchronised
If you have several areas to synchronise then using this call may result in
significant performance gains, both from reducing the function call overhead
and from optimisations in the algorithm itself (e.g. only flushing instruction
cache once for StrongARM).
As with IMB_Range, start & end addresses are inclusive-exclusive and must be
cache line aligned. The list must contain at least one entry, and must not
contain zero-length entries.
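
A speculative C sketch of driving IMB_List from privileged code, using the OS_MMUControl 2 call added by this commit to fetch the ARMop. The ARMop index is a placeholder (the numbering is not given here), OS_MMUControl is assumed to be available from "swis.h", and it is assumed the ARMop can be invoked as an AAPCS call with these three arguments:

  #include <stdint.h>
  #include "swis.h"

  #define ARMOP_IMB_LIST 0   /* placeholder index - not defined in this doc */

  typedef void (*imb_list_fn)(uint32_t *list, uint32_t *end, uint32_t total);

  void sync_two_ranges(uint32_t a0, uint32_t a1, uint32_t b0, uint32_t b1)
  {
      /* (start, end) pairs: inclusive-exclusive, cache line aligned,
         no zero-length entries */
      uint32_t list[4] = { a0, a1, b0, b1 };
      imb_list_fn imb_list;

      if (_swix(OS_MMUControl, _IN(0)|_OUT(0),
                2 | (ARMOP_IMB_LIST << 8), &imb_list) != NULL)
          return;                            /* older kernel: no reason 2 */
      imb_list(list, list + 4, (a1 - a0) + (b1 - b0));
  }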
MMU mapping ARMops
------------------
......
@@ -13,11 +13,11 @@
GBLS Module_ComponentPath
Module_MajorVersion SETS "5.35"
Module_Version SETA 535
Module_MinorVersion SETS "4.79.2.272"
Module_Date SETS "26 Jul 2015"
Module_ApplicationDate SETS "26-Jul-15"
Module_MinorVersion SETS "4.79.2.273"
Module_Date SETS "05 Aug 2015"
Module_ApplicationDate SETS "05-Aug-15"
Module_ComponentName SETS "Kernel"
Module_ComponentPath SETS "castle/RiscOS/Sources/Kernel"
Module_FullVersion SETS "5.35 (4.79.2.272)"
Module_HelpVersion SETS "5.35 (26 Jul 2015) 4.79.2.272"
Module_FullVersion SETS "5.35 (4.79.2.273)"
Module_HelpVersion SETS "5.35 (05 Aug 2015) 4.79.2.273"
END
@@ -5,19 +5,19 @@
*
*/
#define Module_MajorVersion_CMHG 5.35
#define Module_MinorVersion_CMHG 4.79.2.272
#define Module_Date_CMHG 26 Jul 2015
#define Module_MinorVersion_CMHG 4.79.2.273
#define Module_Date_CMHG 05 Aug 2015
#define Module_MajorVersion "5.35"
#define Module_Version 535
#define Module_MinorVersion "4.79.2.272"
#define Module_Date "26 Jul 2015"
#define Module_MinorVersion "4.79.2.273"
#define Module_Date "05 Aug 2015"
#define Module_ApplicationDate "26-Jul-15"
#define Module_ApplicationDate "05-Aug-15"
#define Module_ComponentName "Kernel"
#define Module_ComponentPath "castle/RiscOS/Sources/Kernel"
#define Module_FullVersion "5.35 (4.79.2.272)"
#define Module_HelpVersion "5.35 (26 Jul 2015) 4.79.2.272"
#define Module_FullVersion "5.35 (4.79.2.273)"
#define Module_HelpVersion "5.35 (05 Aug 2015) 4.79.2.273"
#define Module_LibraryVersionInfo "5:35"
@@ -1209,18 +1209,16 @@ DCache_NSets # 4
DCache_Size # 4
DCache_LineLen # 1
DCache_Associativity # 1
# 2
ProcessorArch # 1
ProcessorType # 1 ; Processor type (handles 600 series onwards)
DCache_CleanBaseAddress # 0 ; word used either for IndexBit or CleanBaseAddress
DCache_IndexBit # 4
DCache_CleanNextAddress # 0 ; word used either for IndexSegStart or CleanNextAddress
DCache_IndexSegStart # 4
DCache_RangeThreshold # 4
ProcessorArch # 1
]
IOSystemType # 1 ; 0 => old I/O subsystem, 1 => IOEB+82C710 system, 2..255 => ?
ProcessorType # 1 ; Processor type (handles 600 series onwards)
AlignSpace
ProcessorFlags # 4 ; Processor flags (IMB, Arch4 etc)
[ :DEF: ShowWS
@@ -1237,11 +1235,13 @@ Proc_Cache_CleanInvalidateAll # 4
Proc_Cache_CleanAll # 4
Proc_Cache_InvalidateAll # 4
Proc_Cache_RangeThreshold # 4
Proc_Cache_Examine # 4
Proc_TLB_InvalidateAll # 4
Proc_TLB_InvalidateEntry # 4
Proc_WriteBuffer_Drain # 4
Proc_IMB_Full # 4
Proc_IMB_Range # 4
Proc_IMB_List # 4
Proc_MMU_Changing # 4
Proc_MMU_ChangingEntry # 4
Proc_MMU_ChangingUncached # 4
......
@@ -483,17 +483,32 @@ AMB_SetMemMapEntries ROUT
;get L2PT protection etc. bits, appropriate to PPL in R9, into R11
ADRL r1,PPLTrans
AND lr,r9,#3
LDR r7,=ZeroPage
LDR r11,[r1,lr,LSL #2]
[ MEMM_Type = "VMSAv6"
; VMSAv6 is tricky, use XCBTable/PCBTrans
ASSERT DynAreaFlags_CPBits = 7*XCB_P :SHL: 10
ASSERT DynAreaFlags_NotCacheable = XCB_NC :SHL: 4
ASSERT DynAreaFlags_NotBufferable = XCB_NB :SHL: 4
LDR r2,[r7,#MMU_PCBTrans]
TST r9,#PageFlags_TempUncacheableBits
AND r1,r9,#DynAreaFlags_NotCacheable + DynAreaFlags_NotBufferable
AND r0,r9,#DynAreaFlags_CPBits
ORRNE r1,r1,#XCB_TU<<4 ; if temp uncache, set TU bit
ORR r1,r1,r0,LSR #10-4
LDRB r1,[r2,r1,LSR #4] ; convert to X, C and B bits for this CPU
ORR r11,r11,r1
|
TST r9,#DynAreaFlags_NotCacheable
TSTEQ r9,#PageFlags_TempUncacheableBits
ORREQ r11,r11,#L2_C ;if cacheable (area bit CLEAR + temp count zero), then OR in C bit
TST r9,#DynAreaFlags_NotBufferable
ORREQ r11,r11,#L2_B ;if bufferable (area bit CLEAR), then OR in B bit
]
MOV r10,r4 ;ptr to next page number
LDR r2,[r10] ;page number of 1st page
LDR r7,=ZeroPage
LDR r7,[r7,#CamEntriesPointer] ;r7 -> CAM
ADD r1,r7,r2,LSL #3 ;r1 -> CAM entry for 1st page
[ AMB_LimpidFreePool
......
@@ -245,9 +245,9 @@ BangCamAltEntry
ADRNE r1, PPLTransX ; always use extended pages if supported
LDR r1, [r1, r4, LSL #2] ; get PPL bits and SmallPage indicator
ASSERT DynAreaFlags_CPBits = 7 :SHL: 12
ASSERT DynAreaFlags_NotCacheable = 1 :SHL: 5
ASSERT DynAreaFlags_NotBufferable = 1 :SHL: 4
ASSERT DynAreaFlags_CPBits = 7*XCB_P :SHL: 10
ASSERT DynAreaFlags_NotCacheable = XCB_NC :SHL: 4
ASSERT DynAreaFlags_NotBufferable = XCB_NB :SHL: 4
ORR r0, r0, r1
@@ -256,7 +256,7 @@ BangCamAltEntry
AND r1, r11, #DynAreaFlags_NotCacheable + DynAreaFlags_NotBufferable
TST r11, #PageFlags_TempUncacheableBits
ORRNE r1, r1, #DynAreaFlags_NotCacheable ; if temp uncache, set NC bit, ignore P
ORREQ r1, r1, r4, LSR #6 ; else use NC, NB and P bits
ORREQ r1, r1, r4, LSR #10-4 ; else use NC, NB and P bits
LDRB r1, [r6, r1, LSR #4] ; convert to X, C and B bits for this CPU
ORR r0, r0, r1
@@ -415,11 +415,17 @@ SSETMEMC ROUT
; r0 bit 28 set if write buffer to be flushed (implied by bit 31)
; r1 = entry specifier, if r0 bit 29 set
; (currently, flushing by entry is ignored, and just does full flush)
;
; in: r0 bits 0-7 = 2: reason code 2, read ARMop
; r0 bits 15-8 = ARMop index
;
; out: r0 = ARMop function ptr
;
^ 0
MMUCReason_ModifyControl # 1 ; reason code 0
MMUCReason_Flush # 1 ; reason code 1
MMUCReason_GetARMop # 1
MMUCReason_Unknown # 0
MMUControlSWI Entry
@@ -436,6 +442,7 @@ MMUControlSub
B MMUControl_Unknown
B MMUControl_ModifyControl
B MMUControl_Flush
B MMUControl_GetARMop
MMUControl_Unknown
ADRL r0, ErrorBlock_HeapBadReason
@@ -526,6 +533,15 @@ MMUControl_Flush
ADDS r0,r10,#0
Pull "pc"
MMUControl_GetARMop
AND r0, r0, #&FF00
CMP r0, #(ARMopPtrTable_End-ARMopPtrTable):SHL:6
BHS MMUControl_Unknown
ADRL lr, ARMopPtrTable
LDR r0, [lr, r0, LSR #6]
LDR r0, [r0]
Pull "pc"
; +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
;
; Exception veneers
......
This diff is collapsed.
@@ -793,9 +793,10 @@ DynAreaFlags_CPBits * 7 :SHL: 12 ; cache policy variant for NotBu
;
CP_NCNB_Default * 0 ; no policy variants
CP_NCB_Default * 0 ; OS decides buffer policy (currently always coalescing)
CP_NCB_Default * 0 ; OS decides buffer policy (currently always MergingIdempotent)
CP_NCB_NonMerging * 1 ; Non-merging write buffer. If not available, unbuffered.
CP_NCB_Merging * 2 ; Merging write buffer. If not available, non-merging.
CP_NCB_MergingIdempotent * 3 ; Merging write buffer with idempotent memory (i.e. VMSA "Normal" non-cacheable type). If not available, merging write buffer.
CP_CNB_Default * 0 ; OS decides cache policy (writethrough). NCNB if not available
CP_CNB_Writethrough * 1 ; Writethrough cacheable, non-buffered. If not available, NCNB.
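
As a sketch of how the new policy might be requested from C when creating a dynamic area (bit positions per the ASSERTs elsewhere in this commit: NotBufferable = bit 4, NotCacheable = bit 5, cache policy = bits 12-14; the OS_DynamicArea 0 argument order is the standard documented one):

  #include "swis.h"

  #define DAF_NOT_CACHEABLE         (1u << 5)
  #define DAF_CP_SHIFT              12
  #define CP_NCB_MERGING_IDEMPOTENT 3u

  /* NCB area: not cacheable, bufferable, idempotent-memory policy */
  int create_ncb_area(int size, void **base)
  {
      int handle;
      unsigned flags = DAF_NOT_CACHEABLE
                     | (CP_NCB_MERGING_IDEMPOTENT << DAF_CP_SHIFT);
      if (_swix(OS_DynamicArea, _INR(0,8)|_OUT(1)|_OUT(3),
                0, -1, size, -1, flags, size, 0, 0, "AppNCB",
                &handle, base) != NULL)
          return -1;
      return handle;
  }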
......
@@ -1398,13 +1398,13 @@ HAL_InvalidateCache_ARMvF
MCR p15, 2, r11, c0, c0, 0 ; write CSSELR from r11
myISB ,r9
MRC p15, 1, r9, c0, c0, 0 ; read current CSSIDR to r9
AND r10, r9, #&7 ; extract the line length field
AND r10, r9, #CCSIDR_LineSize_mask ; extract the line length field
ADD r10, r10, #4 ; add 4 for the line length offset (log2 16 bytes)
LDR r8, =&3FF
AND r8, r8, r9, LSR #3 ; r8 is the max number of the way size (right aligned)
LDR r8, =CCSIDR_Associativity_mask:SHR:CCSIDR_Associativity_pos
AND r8, r8, r9, LSR #CCSIDR_Associativity_pos ; r8 is the max number of the way size (right aligned)
CLZ r13, r8 ; r13 is the bit position of the way size increment
LDR r12, =&7FFF
AND r12, r12, r9, LSR #13 ; r12 is the max number of the index size (right aligned)
LDR r12, =CCSIDR_NumSets_mask:SHR:CCSIDR_NumSets_pos
AND r12, r12, r9, LSR #CCSIDR_NumSets_pos ; r12 is the max number of the index size (right aligned)
20 ; Loop2
MOV r9, r12 ; r9 working copy of the max index size (right aligned)
30 ; Loop3
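
For reference, the CCSIDR fields that the loop above walks, decoded in C (the field layout - LineSize in bits 0-2, Associativity in bits 3-12, NumSets in bits 13-27 - is the architectural ARMv7 definition behind the CCSIDR_* constants):

  #include <stdint.h>

  /* Total size in bytes of the cache level a CCSIDR value describes */
  uint32_t ccsidr_cache_bytes(uint32_t ccsidr)
  {
      uint32_t log2_line = (ccsidr & 7) + 4;          /* log2(line bytes) */
      uint32_t ways = ((ccsidr >> 3) & 0x3FF) + 1;    /* associativity */
      uint32_t sets = ((ccsidr >> 13) & 0x7FFF) + 1;  /* number of sets */
      return (1u << log2_line) * ways * sets;
  }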
@@ -2326,7 +2326,9 @@ InitProcVec_FIQ
InitProcVecsEnd
;
; In: a1 = flags (L1_B,L1_C,L1_AP,L1_APX)
; In: a1 = flags (L1_B,L1_C,L1_TEX)
; bit 20 set if doubly mapped
; bit 21 set if L1_AP specified (else default to AP_None)
; a2 = physical address
; a3 = size
; Out: a1 = assigned logical address, or 0 if failed (no room)
@@ -2339,16 +2341,17 @@ InitProcVecsEnd
ASSERT L1_C = 1:SHL:3
[ MEMM_Type = "VMSAv6"
ASSERT L1_AP = 2_100011 :SHL: 10
ASSERT L1_TEX = 2_111 :SHL: 12
|
ASSERT L1_AP = 3:SHL:10
ASSERT L1_TEX = 2_1111 :SHL: 12
]
MapInFlag_DoublyMapped * 1:SHL:20
MapInFlag_APSpecified * 1:SHL:21
RISCOS_MapInIO ROUT
Entry "v1-v5,v7"
MOV v7, #L1_B:OR:L1_C
ORR v7, v7, #L1_AP ; v7 = user-specifiable flags
LDR v7, =L1_B:OR:L1_C:OR:L1_AP:OR:L1_TEX ; v7 = user-specifiable flags
MOV v5, a1 ; v5 = original flags
MOV v4, a2 ; v4 = original requested address
ADD a3, a2, a3 ; a3 -> end (exclusive)
@@ -2370,7 +2373,7 @@ RISCOS_MapInIO ROUT
LDR ip, =ZeroPage
LDR a4, =L1PT
AND a1, a1, v7 ; only allow bufferable as flags option
AND a1, a1, v7 ; mask out unsupported attributes
[ MEMM_Type = "VMSAv6"
ORR a1, a1, #L1_XN ; force non-executable to prevent speculative instruction fetches
]
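
A rough sketch of what a HAL-side call might now look like (the C-callable veneer is an assumption for illustration; the register interface is a1 = flags, a2 = physical address, a3 = size as documented above):

  #include <stdint.h>

  /* Assumed veneer onto the kernel entry documented above */
  extern void *RISCOS_MapInIO(uint32_t flags, uint32_t phys, uint32_t size);

  #define MAPINFLAG_DOUBLYMAPPED (1u << 20)
  #define MAPINFLAG_APSPECIFIED  (1u << 21)

  void *map_device_registers(uint32_t phys, uint32_t size)
  {
      /* C, B and TEX clear: Device-type mapping, and with bit 21 clear
         the access permissions take the default */
      return RISCOS_MapInIO(0, phys, size);
  }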
......
@@ -1598,11 +1598,14 @@ Issue_Service_SWI ROUT
; 1 -> read MMU features (ROL, unimplemented here)
; 2-31 -> reserved just in case ROL have used them
; 32 -> read processor vectors location
; 33 -> read cache information
PlatFeatSWI ROUT
Push lr
CMP r0, #32 ;Is it a known reason code?
BEQ %FT30
CMP r0, #33
BEQ %FT40
CMP r0, #0
BNE %FT50 ;No, so send out a service call
@@ -1632,6 +1635,27 @@ platfeat_irqinsert
Pull lr
B SLVK
40
; Read cache information
; In: r1 = cache level (0-based)
; Out: r0 = Flags
; bits 0-2: cache type:
; 000 -> none
; 001 -> instruction
; 010 -> data
; 011 -> split
; 100 -> unified
; 101-111 -> reserved
; Other bits: reserved
; r1 = D line length
; r2 = D size
; r3 = I line length
; r4 = I size
; r0-r4 = zero if cache level not present
ARMop Cache_Examine
Pull lr
B SLVK
50
[ {FALSE}
Push "r1-r8"
......
@@ -217,18 +217,36 @@ MemoryConvert ROUT
BNE %BT10 ; Do next entry if we don't have to change L2.
MOV r4, r4, LSR #12
LDR r3, =ZeroPage
ADD r4, r8, r4, LSL #2 ; Address of L2 entry for logical address.
[ MEMM_Type = "VMSAv6"
; VMSAv6 is hard, use XCBTable/PCBTrans
ASSERT DynAreaFlags_CPBits = 7*XCB_P :SHL: 10
ASSERT DynAreaFlags_NotCacheable = XCB_NC :SHL: 4
ASSERT DynAreaFlags_NotBufferable = XCB_NB :SHL: 4
TST r0, #cacheable_bit ; n.b. must match EQ/NE used by ARMop calls
AND lr, r5, #DynAreaFlags_NotCacheable + DynAreaFlags_NotBufferable
AND r5, r5, #DynAreaFlags_CPBits
ORR lr, lr, r5, LSR #10-4
LDR r5, [r3, #MMU_PCBTrans]
ORREQ lr, lr, #XCB_TU<<4 ; if temp uncache, set TU bit
LDRB lr, [r5, lr, LSR #4] ; convert to X, C and B bits for this CPU
LDR r5, [r4] ; Get L2 entry (safe as we know address is valid).
BIC r5, r5, #(L2_C+L2_B+L2_TEX) :AND: 255 ; Knock out existing attributes (n.b. assumed to not be large page!)
ORR r5, r5, lr ; Set new attributes
STR r5, [r4] ; Write back new L2 entry.
|
LDR r5, [r4] ; Get L2 entry (safe as we know address is valid).
TST r0, #cacheable_bit
BICEQ r5, r5, #L2_C ; Disable/enable cacheability.
ORRNE r5, r5, #L2_C
STR r5, [r4] ; Write back new L2 entry.
]
MOV r5, r0
ASSERT (L2PT :SHL: 12) = 0 ; Ensure we can convert r4 back to the page log addr
ASSERT (L2PT :SHL: 10) = 0 ; Ensure we can convert r4 back to the page log addr
MOV r0, r4, LSL #10
; *** KJB - this assumes that uncacheable pages still allow cache hits (true on all
; ARMs so far).
LDR r3, =ZeroPage
ADR lr, %FT65
ARMop MMU_ChangingEntry,EQ,tailcall,r3 ; Clean cache & TLB
ARMop MMU_ChangingUncachedEntry,NE,tailcall,r3 ; Clean TLB
@@ -975,7 +993,7 @@ RP_error
; In: r0 bits 0..7 = 13 (reason code 13)
; r0 bit 8 = 1 to map bufferable space (0 is normal, non-bufferable)
; r0 bit 9 = 1 to map cacheable space (0 is normal, non-cacheable)
; r0 bits 10..12 = 0 (reserved for cache policy)
; r0 bits 10..12 = cache policy
; r0 bits 13..15 = 0 (reserved flags)
; r0 bit 16 = 1 to doubly map
; r0 bit 17 = 1 if access privileges specified
@@ -991,11 +1009,23 @@ RP_error
MapIOpermanent ROUT
Push "r0-r2,r12,lr"
MOV lr, r0
TST lr, #1:SHL:8 ;test bufferable bit
MOVNE r0, #L1_B
MOVEQ r0, #0
TST lr, #1:SHL:9 ;test cacheable bit
ORRNE r0, r0, #L1_C
LDR r12, =ZeroPage
ASSERT XCB_NB = 1:SHL:0
ASSERT XCB_NC = 1:SHL:1
ASSERT XCB_P = 1:SHL:2
AND r0, r0, #&1F00
MOV r0, r0, LSR #8
LDR r12, [r12, #MMU_PCBTrans]
EOR r0, r0, #XCB_NB+XCB_NC ; Invert C+B to match XCBTable
LDRB r0, [r12, r0]
; Convert from L2 attributes to L1
ASSERT L1_C = L2_C
ASSERT L1_B = L2_B
ASSERT L2_TEXShift < L1_TEXShift
AND r12, r0, #L2_TEX
BIC r0, r0, #L2_TEX
ORR r0, r0, r12, LSL #L1_TEXShift-L2_TEXShift
; Deal with other flags
TST lr, #1:SHL:16
ORRNE r0, r0, #MapInFlag_DoublyMapped
TST lr, #1:SHL:17
......
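A hedged user-side sketch of the extended OS_Memory 13 (flag bits as listed above; the r1 = physical address, r2 = size in, r3 = logical address out convention is assumed from the existing OS_Memory 13 documentation):

  #include "swis.h"

  void *map_io(unsigned phys, unsigned size)
  {
      void *log;
      unsigned flags = 13
                     | (1u << 8)   /* bufferable */
                     | (0u << 9)   /* not cacheable */
                     | (2u << 10); /* cache policy: CP_NCB_Merging */
      if (_swix(OS_Memory, _INR(0,2)|_OUT(3), flags, phys, size, &log))
          return NULL;
      return log;
  }
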
@@ -1844,7 +1844,7 @@ osri6_table
DCD ZeroPage+Module_List ;9
DCD ZeroPage+ModuleSHT_Entries ;10
DCD ZeroPage+ModuleSWI_HashTab ;11
DCD ZeroPage+IOSystemType ;12
DCD 0 ;12 (was IOSystemType)
DCD L1PT ;13
DCD L2PT ;14
DCD UNDSTK ;15
......
@@ -514,9 +514,6 @@ ReadMachineType Entry "r0-r12"
MOV r0, #4_3330 ; Assume VGA during osinit
STRB r0, [r1, #MonitorLeadType]
MOV r2, #0 ; Deprecated, just zero it
STRB r2, [r1, #IOSystemType]
EXIT
|
MOV r12, #IOMD_Base
@@ -890,6 +887,7 @@ PowerHardware ;On Stork, ensure Combo chip, Winnie, Floppy etc are pow
EXIT
]
[ :LNOT: HAL
[ STB
; +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
;
@@ -1161,6 +1159,7 @@ Configure37C665 Entry "r0,r1"
EXIT
]
] ; :LNOT: HAL
; +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
;
......
@@ -164,31 +164,21 @@ BangCamAltEntry
AND r4, r11, #3 ; first use PPL bits
LDR r1, [r1, r4, LSL #2] ; get PPL bits and SmallPage indicator
[ {FALSE}
TST r11, #DynAreaFlags_NotCacheable
TSTEQ r11, #PageFlags_TempUncacheableBits
ORREQ r1, r1, #L2_C ; if cacheable (area bit CLEAR + temp count zero), then OR in C bit
TST r11, #DynAreaFlags_NotBufferable
ORREQ r1, r1, #L2_B ; if bufferable (area bit CLEAR), then OR in B bit
ORR r0, r0, r1
|
ASSERT DynAreaFlags_CPBits = 7 :SHL: 12
ASSERT DynAreaFlags_NotCacheable = 1 :SHL: 5
ASSERT DynAreaFlags_NotBufferable = 1 :SHL: 4
ASSERT DynAreaFlags_CPBits = 7*XCB_P :SHL: 10
ASSERT DynAreaFlags_NotCacheable = XCB_NC :SHL: 4
ASSERT DynAreaFlags_NotBufferable = XCB_NB :SHL: 4
ORR r0, r0, r1
LDR r6, =ZeroPage
LDR r6, [r6, #MMU_PCBTrans]
AND r4, r11, #DynAreaFlags_CPBits
AND r1, r11, #DynAreaFlags_NotCacheable + DynAreaFlags_NotBufferable
TST r11, #PageFlags_TempUncacheableBits
ORRNE r1, r1, #DynAreaFlags_NotCacheable ; if temp uncache, set NC bit, ignore P
ORREQ r1, r1, r4, LSR #6 ; else use NC, NB and P bits
AND r1, r11, #DynAreaFlags_NotCacheable + DynAreaFlags_NotBufferable
AND r4, r11, #DynAreaFlags_CPBits
ORRNE r1, r1, #XCB_TU<<4 ; if temp uncache, set TU bit
ORR r1, r1, r4, LSR #10-4
LDRB r1, [r6, r1, LSR #4] ; convert to X, C and B bits for this CPU
ORR r0, r0, r1
]
LDR r1, =L2PT ; point to level 2 page tables
@@ -360,11 +350,17 @@ SSETMEMC ROUT
; r0 bit 28 set if write buffer to be flushed (implied by bit 31)
; r1 = entry specifier, if r0 bit 29 set
; (currently, flushing by entry is ignored, and just does full flush)
;
; in: r0 bits 0-7 = 2: reason code 2, read ARMop
; r0 bits 15-8 = ARMop index
;
; out: r0 = ARMop function ptr
;
^ 0
MMUCReason_ModifyControl # 1 ; reason code 0
MMUCReason_Flush # 1 ; reason code 1
MMUCReason_GetARMop # 1
MMUCReason_Unknown # 0
MMUControlSWI Entry
@@ -381,6 +377,7 @@ MMUControlSub
B MMUControl_Unknown
B MMUControl_ModifyControl
B MMUControl_Flush
B MMUControl_GetARMop
MMUControl_Unknown
ADRL r0, ErrorBlock_HeapBadReason
@@ -485,6 +482,15 @@ MMUControl_Flush
ADDS r0,r10,#0
Pull "pc"
MMUControl_GetARMop
AND r0, r0, #&FF00
CMP r0, #(ARMopPtrTable_End-ARMopPtrTable):SHL:6
BHS MMUControl_Unknown
ADRL lr, ARMopPtrTable
LDR r0, [lr, r0, LSR #6]
LDR r0, [r0]
Pull "pc"
; +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
;
; Exception veneers
......