+	counters = slab->counters;
@@ -2000,19 +2003,19 @@ static inline void *acquire_slab(struct kmem_cache *s,
-static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain);
> And one which actually accomplishes those two things you're saying, as
@@ -2720,12 +2723,12 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> That means working towards
> proposal from Google to replace rmap because it's too CPU-intense
> are safe to access?
> to userspace in 4kB granules.
> Now, as far as struct folio being a dumping ground, I would like to
> > I only hoped we could do the same for file pages first, learn from
> none of our six (!)
> head and tail pages that will continue to require clarification.
index ddeaba947eb3..5f3d2efeb88b 100644
> types.
-	SetPageActive(page);
So if we can make a tiny gesture
> +	const struct page *:	(const struct slab *)_compound_head(p), \
That's a more complex transition, but I think that would be
> >> we're going to be subsystem users' faces.
- * page/objects.
> that up, and this is great.
You ask to exclude
> them in is something like compaction which walks PFNs.
+	next_slab = slab;
-		add_partial(n, page, DEACTIVATE_TO_TAIL);
+		add_partial(n, slab, DEACTIVATE_TO_TAIL);
@@ -2410,40 +2413,40 @@ static void unfreeze_partials(struct kmem_cache *s,
-	while (discard_page) {
> > wants to address, I think that bias toward recent pain over much
> > userspace and they can't be on the LRU.
I got that you really don't want
>> revamped it to take (page, offset, prot), it could construct the
> compressed blocks, but if you're open to taking this on, I'd be very happy.
> tail pages being passed to compound_order().
But yet we call compound_head() on every one of them
> > > only allocates memory on 2MB boundaries and yet lets you map that memory
> medium/IO size/alignment, so you could look on the folio as being a tool to
> > Folios should give us large allocations of file-backed memory and
Network buffers seem to be headed towards
> and patches to help work out kinks that immediately and inevitably
> opposed to a shared folio where even 'struct address_space *mapping'
[GIT PULL] Memory folios for v5.15
+			struct slab *slab, void *head, void *tail,
> They can all be accounted to a cgroup.
For example it would immediately
> > > once we're no longer interleaving file cache pages, anon pages and
> more comprehensive cleanup in MM code and MM/FS interaction that makes
> So if someone sees "kmem_cache_alloc()", they can probably make a
> anon/file", and then unsafely access overloaded member elements:
> wouldn't count silence as approval - just like I don't see approval as
> > > separately allocated.
> > variable temporary pages without any extra memory overhead other than
struct page is a lot of things and anything but simple and
> {
> But until it is all fixed [1], having a type which says "this is not a
> ones.
> vitriol and ad-hominems both in public and in private channels.
> of moveable page and unreclaimable object is an analog of unmoveable page.
@@ -4924,32 +4928,32 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
-		page = READ_ONCE(c->page);
> > not sure how this could be resolved other than divorcing the idea of a
> > > > the specific byte.
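The thread keeps returning to the cost of the head/tail-page ambiguity ("we call compound_head() on every one of them"). For context, here is a minimal sketch of what that lookup does, modelled on the kernel's compound_head() but simplified and renamed; the point of typed folio/slab pointers is that a caller holding one never has to repeat this test.

/*
 * Sketch only: roughly what the head-page lookup does. Every interface
 * that accepts a bare struct page * has to do this, because the caller
 * might hand in a tail page.
 */
static inline struct page *sketch_compound_head(struct page *page)
{
	unsigned long head = READ_ONCE(page->compound_head);

	/* Bit 0 set means "tail page"; the rest points at the head page. */
	if (head & 1)
		return (struct page *)(head - 1);
	return page;
}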
It's not like page isn't some randomly made up term
@@ -889,7 +887,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
-static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p)
+static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
@@ -902,12 +900,12 @@ static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p)
+ */
> more obvious to a kernel newbie.
> > > > + */
To scope the actual problem that is being addressed by this
> including even grep-ability, after a couple of tiny page_set and pageset
> }
-	unsigned int active;		/* SLAB */
The points Johannes is bringing
> actually want that.
> > that was queued up for 5.15.
Not having folios (with that or another
> intern to our group, I had to stop everyone each time that they used
-	VM_BUG_ON_PAGE(!PageSlab(page), page);
> you think it is.
>> inc_mm_counter_fast(mm, mm_counter_file(page));
> >> for now.
> name a little strange, but working with it I got used to it quickly.
> actually have it be just a cache entry for the fs to read and write,
>> > That does turn things into a much bigger project than what Matthew signed up
> > > ie does it really buy you anything?
> going to duplicate the implementation for each subtype?
But I'd really
> to get used to "page".
> > we see whether it works or not?
> due to the page's role inside MM core code.
> When I saw Matthew's proposal to rename folio --> pageset, my reaction was,
> If yes, how would kernel reclaim an order-0 (2MB) page that has an
> > > + * Return: The slab which contains this page.
> core abstraction, and we should endeavor to keep our core data structures
> > buddy
> > on-demand would be a huge benefit down the road for the above reason.
> > > state it leaves the tree in, make it directly more difficult to work
-		validate_slab(s, page);
+	list_for_each_entry(slab, &n->partial, slab_list) {
> You can't think it's that bonkers when you push for replicating
> folios in general and anon stuff in particular).
> > stuff said from the start it won't be built on linear struct page
But this is a case
> subtypes which already resulted in the struct slab patches.
> > > -	if (unlikely(!page)) {
+	slab = alloc_slab(s, alloc_gfp, node, oo);
> A more in-depth analysis of where and how we need to deal with
Since there are very few places in the MM code that expressly
> On one hand, the ambition appears to substitute folio for everything
> Name it by what it *is*, not by analogies.
It's not safe to call this function
>> which inherit from "struct page" but I am not convinced that we
> folio type.
> folio/pageset, either.
> cra^Wcoherency management for a filesystem to screw up.
> > folio type.
> get rid of such usage, but I wish it could be merged _only_ with the
> executables.
> Based on adoption rate and resulting code, the new abstraction has nice
@@ -2128,7 +2131,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
> The continued silence from Linus is really driving me to despair.
> cache data plane and the backing memory plane.
> Similarly, something like "head_page", or "mempages" is going to be a bit
> pages out from generic pages.
> > > Looking at some core MM code, like mm/huge_memory.c, and seeing all the
Once the high-level page
> are for allocations which should not live for very long.
> by 1/63 is going to increase performance by 1/630 or 0.15%.
> > The main thing we have to stop
> > default method for allocating the majority of memory in our
> mm.
Not doable out of the gate, but retaining the ability to
> > level of granularity for some of their memory.
-	struct page old;
+	while ((slab = slub_percpu_partial(c))) {
> obvious today.
> > I have a little list of memory types here:
> No objection to add a mem_cgroup_charge_folio().
> > On Sep 22, 2021, at 12:26 PM, Matthew Wilcox wrote:
> Theodore Ts'o wrote:
If naming is the issue, I believe
This also tackles the point Johannes made: folios being
> + * or NULL.
> > proper one-by-one cost/benefit analyses on the areas of application.
> do any better, but I think it is.
> This discussion is now about whether folios are suitable for anon pages
> could more of that be handled transparently by the VM?
> compound_order() does not expect a tail page; it returns 0 unless it's
> > On Tue, Sep 21, 2021 at 03:47:29PM -0400, Johannes Weiner wrote:
are usually pushed
> > To clarify: I do very much object to the code as currently queued up,
> > you can't make it happen without that.
> > bigger long-standing pain strikes again.
> tail page" is, frankly, essential.
That's a real honest-to-goodness operating system
> a selectively applied tool, and I think it prevents us from doing
> weeks old and has utterly missed the merge window?
> mapping pointers, 512 index members, 512 private pointers, 1024 LRU
> future allocated on demand for
Also - all the filesystem code that's being converted tends to talk and think in
>> }
them out of the way of other allocations is useful.
> it, but the people doing the work need to show the benefits.
> Note that we have a bunch of code using page->lru, page->mapping, and
> little tangible value.
> No, this is a good question.
And to reiterate the
> list pointers, 512 dirty flags, 512 writeback flags, 512 uptodate
> But enough with another side-discussion :).
> forward rather than a way back.
>	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
> Well, a handful of exceptions don't refute the broader point.
index 5b152dba7344..cf8f62c59b0a 100644
> space in weird units like that (configure the rt volume, set a 56k rt
> "minimum allocation granularity".
> Actual code might make this discussion more concrete and clearer.
> > especially all the odd compounds with page in it.
Right now, we have
> I don't intend to convert either of those to folios.
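One quote above notes that compound_order() does not expect a tail page. A hedged sketch of the behaviour being described, simplified from the era's implementation (field layout assumed): the helper trusts its caller to pass a head page, and for a base page or a tail page it silently reports order 0 instead of failing loudly.

/*
 * Sketch only, not verbatim kernel code: the order of a compound page
 * is stashed in its first tail page, so a tail page fed in here gives
 * a quietly wrong answer rather than an error.
 */
static inline unsigned int sketch_compound_order(struct page *page)
{
	if (!PageHead(page))
		return 0;
	return page[1].compound_order;
}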
> >		free_nonslab_page(page, object);
> > Picture the near future Willy describes, where we don't bump struct
I don't even care what name it is.
-		discard_slab(s, page);
+	list_for_each_entry_safe(slab, t, &discard, slab_list)
>>> and I want greppable so it's not confused with something somebody else
> some major problems
-	int order = compound_order(page);
+
> idea of what that would look like.
>> On 21.10.21 08:51, Christoph Hellwig wrote:
-		slab_err(s, page, "Padding overwritten.
> folios and the folio API.
>> For the record: I was happy to see the slab refactoring, although I
> allocate 4kB to cache them.
> disambiguate remaining struct page usage inside MM code.
> >>> with and understand the MM code base.
> a page allocator function; the pte handling is pfn-based except for
> + */
-	void *freelist;		/* first free object */
> > > safety for anon pages.
> I only hoped we could do the same for file pages first, learn from
And might convince reluctant people to get behind the effort.
> head page to determine what kind of memory has been affected, but we
+	unsigned int active;		/* SLAB */
> > > compound pages aren't the way toward scalable and maintainable larger
> Matthew had also a branch where it was renamed to pageset.
+ * list_lock.
> > > > tail pages into either subsystem, so no ambiguity
> > ambiguity it created between head and tail pages.
> Again, very much needed.
> > easy.
We don't want to
> folio.
> On Tue, Aug 24, 2021 at 12:38 PM Linus Torvalds
@@ -3116,8 +3119,8 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
> > > + * page_slab - Converts from page to slab.
-			objects += page->pobjects;
+		if (slab) {
>> > requests, which are highly parallelizable.
> > Let's consider folio_wait_writeback(struct folio *folio)
+		validate_slab(s, slab);
@@ -4715,8 +4719,8 @@ static int validate_slab_node(struct kmem_cache *s,
-	list_for_each_entry(page, &n->full, slab_list) {
But we're continuously
> > > anonymous pages to be folios from the call Friday, but I haven't been getting
> And if down the line we change how the backing memory is implemented,
But strides have
> > a goal that one could have, but I think in this case is actually harmful.
> That's actually pretty bad; if you have, say, a 768kB vmalloc space,
> > memory on cheap flash saves expensive RAM.
> migrate_pages() have and pass around?
> > On Mon, Oct 18, 2021 at 02:12:32PM -0400, Kent Overstreet wrote:
> slab
+#ifdef CONFIG_MEMCG
> > clever term, but it's not very natural.
>> we're going to be subsystem users' faces.
The only reason nobody has bothered removing those until now is
> name a little strange, but working with it I got used to it quickly.
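The thread contrasts folio_wait_writeback(struct folio *folio) with its page-based predecessor. As a hedged sketch of the pattern being argued for (signatures abridged, not verbatim kernel code): the typed entry point operates on something that cannot be a tail page, and a legacy wrapper resolves to it once, at the boundary, instead of every helper repeating the head-page lookup internally.

/* Typed entry point: the argument is known not to be a tail page. */
void folio_wait_writeback(struct folio *folio);

/* Legacy page-based call: normalise once, at the edge. */
static inline void wait_on_page_writeback(struct page *page)
{
	folio_wait_writeback(page_folio(page));
}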
> >>		page_add_file_rmap(page, false);
> And all the other suggestions I've seen so far are significantly worse,
> >>>> contention still to be decided and resolved for the work beyond file backed
> > they're not, how's the code that works on both types of pages going to change to
> + * slab is pointing to the slab from which the objects are obtained.
It certainly won't be the last step.
+SLAB_MATCH(compound_head, slab_list);
I mean I'm not the MM expert, I've only been touching
We have five primary users of memory
+	    !check_bytes_and_report(s, slab, p, "End Poison",
> once we're no longer interleaving file cache pages, anon pages and
> prefer to go off on tangents and speculations about how the page
> On Tue, Oct 19, 2021 at 12:11:35PM -0400, Kent Overstreet wrote:
> access the (unsafe) mapping pointer directly.
> The memcg interface is fully type agnostic nowadays, but it also needs
> to reduced TLB pressure, same as hugepages but without nearly as much memory
> +static int slab_pad_check(struct kmem_cache *s, struct slab *slab)
@@ -919,8 +917,8 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
> > PAGE_SIZE bytes.
>> of the way the code reads is different from how the code is executed,
@@ -334,7 +397,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig,
We at the very least need wrappers like
> > folios.
>> memory blocks.
+#define page_slab(p)		(_Generic((p), \
> initially.
>> On Wed, Sep 22, 2021 at 11:08:58AM -0400, Johannes Weiner wrote:
-	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page);
+	VM_BUG_ON_PAGE(memcg_data && !
> page struct is already there and it's an effective way to organize
> E.g.
>	flags |= __GFP_COMP;
> > > > > > problem, because the mailing lists are not flooded with OOM reports
@@ -3255,10 +3258,10 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
>		free_nonslab_page(page, object);
> > tracking everything as units of struct page, all the public facing
+		list_add(&slab->slab_list, &discard);
-	list_for_each_entry_safe(page, h, &discard, slab_list)
> I think the problem with folio is that everybody wants to read in her/his
It's a natural
> the plan - it's inevitable that the folio API will grow more
> > I genuinely don't understand.
> page/compound page confusion that exists now, and it seems like a
+
> page that would not conceptually fit into this version of the folio.
> How would you reduce the memory overhead of struct page without losing
> > there.
> when paging into compressed memory pools.
> > + *
> > > memory on cheap flash saves expensive RAM.
> > > The process is the same whether you switch to a new type or not.
> > isn't the only thing we should be doing - as we do that, that will (is!)
>	if ((unsigned long)mapping & PAGE_MAPPING_ANON)
> Indeed, we don't actually need a new page cache abstraction.
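Two diff fragments above belong to the struct slab split: SLAB_MATCH() and page_slab(). A hedged reconstruction of what they do, based on the patches being discussed (details may differ from the version that was eventually merged): the first compile-time asserts that struct slab fields stay at the same offsets as the struct page fields they overlay, and the second is a _Generic() cast that converts page to slab while preserving const-ness.

/* Assert that a struct slab field overlays the matching struct page field. */
#define SLAB_MATCH(pg, sl)						\
	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))

SLAB_MATCH(compound_head, slab_list);	/* ensure bit 0 stays clear */
SLAB_MATCH(slab_cache, slab_cache);
SLAB_MATCH(counters, counters);

/* Cast page -> slab, keeping const-ness, after resolving the head page. */
#define page_slab(p)	(_Generic((p),					\
	const struct page *:	(const struct slab *)_compound_head(p),	\
	struct page *:		(struct slab *)_compound_head(p)))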
> >>> response to my feedback, I'm not excited about merging this now and (Hugh
> > I'm sorry, I don't have a dog in this fight and conceptually I think folios are
>	if (PageHead(head)) {
> I know Kent was surprised by this.
> allocations.
"folio" is no worse than "page", we've just had more time
> memory descriptors is more than a year out.
> >> are actually what we want to be "lru_mem", just with a much clearer
Folios can still be composed of multiple pages,
-static inline struct page *alloc_slab_page(struct kmem_cache *s,
+static inline struct slab *alloc_slab(struct kmem_cache *s,
+	__SetPageSlab(page);
> avoided [sorry, couldn't resist].
> Perhaps you could comment on how you'd see separate anon_mem and
> anon_folio and file_folio inheriting from struct folio - either would
Not quite as short as folios,
> and shouldn't have happened) colour our perceptions and keep us from having
> as well.
> > But we expect most interfaces to pass around a proper type (e.g.,
> places we don't need them.
> unclear future evolution wrt supporting subpages of large pages, should we
> + *
-	old.counters = READ_ONCE(page->counters);
+	old.freelist = READ_ONCE(slab->freelist);
> > early when entering MM code, rather than propagating it inward, in
--- a/include/linux/slub_def.h
>> You snipped the part of my paragraph that made the 'No' make sense.
> In the new scheme, the pages get added to the page cache for you, and
> characters make up a word, there's a number of words to each (cache)
> else.
I can even be convinced that we can figure out the exact fault
For an anon page it protects swap state.
> pervasive this lack of typing is than the compound page thing.
> If you'd asked for this six months ago -- maybe.
> > > > directly or indirectly.
>> compound page.
> unmoveable sub-2MB data chunk?
>>>> - it's become apparent that there haven't been any real objections to the code
> > > order to avoid huge, massively overlapping page and folio APIs.
> > APIs that use those units can go away.
> stuff like this.
> > > the new dumping ground for everyone to stash their crap.
> > maintainable, the folio would have to be translated to a page quite
> My worry is more about 2).
> If we
On Friday's call, several
+}
>> }
This is rather generic.
> 1) If folio is to be a generic headpage, it'll be the new
> private a few weeks back.
Then I left Intel, and Dan took over.
> > sure what's going on with fs/cachefiles/.
- away from "the page".
> > for, but we shouldn't all be sitting on the sidelines here
> approach, but this may or may not be the case.
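One quote above floats "anon_folio and file_folio inheriting from struct folio". A purely hypothetical sketch of what that C-style inheritance-by-embedding could look like; neither struct anon_folio nor folio_anon() exists in the tree, the names are mine, and this only illustrates the embed-the-base-type pattern being debated.

/* Hypothetical subtype: embeds the base type as its first member. */
struct anon_folio {
	struct folio folio;	/* must stay first so casts remain valid */
};

/* Hypothetical downcast helper, guarded by a type check. */
static inline struct anon_folio *folio_anon(struct folio *folio)
{
	VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
	return container_of(folio, struct anon_folio, folio);
}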
> If this is GFP_DENSE, we know it's a long-lived allocation and we can
>> any point in *managing* memory in a different size from that in which it
When everybody's allocating order-0 pages, order-4 pages
+			object_err(s, slab, object, "Freelist Pointer check fails");
-	if (!check_object(s, page, object, SLUB_RED_INACTIVE))
+	if (!check_object(s, slab, object, SLUB_RED_INACTIVE))
-	if (!alloc_consistency_checks(s, page, object))
+	if (!alloc_consistency_checks(s, slab, object))
- * If this is a slab page then lets do the best we can
+ * If this is a slab then lets do the best we can.
> > > that was queued up for 5.15.
> The basic process I've had in mind for splitting struct page up into multiple
-		slab_err(s, page, "Freepointer corrupt");
-static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
+static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
> It's been a massive effort for Willy to get this far, who knows when
There was also
> A longer thread on that can be found here:
> of most MM code - including the LRU management, reclaim, rmap,
> Once everybody's allocating order-4 pages, order-4 pages become easy
> page (if it's a compound page).
> page->mapping, PG_readahead, PG_swapcache, PG_private
> I think we need a better analysis of that mess and a concept where
And it basically duplicates all our page
> to end users (..thus has no benefits at all.
>>		page = pfn_to_page(low_pfn);
> > > > + * @p: The page.
> > (certainly throughout filesystems) which assume that a struct page is
> It's been in Stephen's next tree for a few weeks with only minor problems
> that somebody else decides to work on it (and indeed Google have
> > }
> > @@ -2791,8 +2794,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
-		slab_err(s, page, "Bulk freelist count(%d) invalid(%d)\n",
> tried to verify them and they may come to nothing.
>	return 1;
> up to current memory sizes without horribly regressing certain
> That kind of change is actively dangerous.
> It would mean that anon-THP cannot benefit from the work Willy did with
>> You're really just recreating a crappier, less maintainable version of
-	VM_BUG_ON_PAGE(!PageSlab(page), page);
> > - shrink_page_list() uses page_mapping() in the first half of the
> > }
> for, but we shouldn't all be sitting on the sidelines here
> On Sat, Oct 23, 2021 at 12:00:38PM -0400, Kent Overstreet wrote:
> code paths that are for both file + anonymous pages, unless Matthew has
> > state it leaves the tree in, make it directly more difficult to work
> a year now, and you come in AT THE END OF THE MERGE WINDOW to ask for it
> Because, as you say, head pages are the norm.
> The folio doc says "It is at least as large as %PAGE_SIZE";
> In order to maximize the performance (so that pages can be shared in
>>> 1:1+ mapping to struct page that is inherent to the compound page.
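One fragment above lists page->mapping, PG_readahead, PG_swapcache and PG_private among the overloaded state that generic code pokes at. The mapping pointer is the classic example: its low bits are repurposed as type tags. A simplified sketch of that mechanism (modelled on the kernel's PAGE_MAPPING_* bits, with my own names; not a verbatim copy) shows the kind of implicit typing the thread is arguing about.

/*
 * Sketch: the same field means "address_space" for file pages and
 * "anon_vma" for anon pages; a low bit in the pointer says which.
 */
#define SKETCH_MAPPING_ANON	0x1UL

static inline bool sketch_page_is_anon(const struct page *page)
{
	return ((unsigned long)page->mapping & SKETCH_MAPPING_ANON) != 0;
}

static inline struct address_space *sketch_page_mapping_file(struct page *page)
{
	/* Only meaningful when the anon bit is clear. */
	if (sketch_page_is_anon(page))
		return NULL;
	return page->mapping;
}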
> any pain (thus ->lru can be reused for real lru usage).
> the systematic replacement of absolutely *everything* that isn't a
>> mk_pte() assumes that a struct page refers to a single pte.
> important or the most error-prone aspect of the many identities struct
> >> guess what it means, and it's memorable once they learn it.
> In the current state of the folio patches, I agree with you.
I asked to keep anon pages out of it (and in the future
> > > So what is the result here?
> > > > We have the same thoughts in MM and growing memory sizes.
> that was queued up for 5.15.
> Unfortunately, I think this is a result of me wanting to discuss a way
> highlight when "generic" code is trying to access type-specific stuff
> not sure how this could be resolved other than divorcing the idea of a
> > > +/**
> patch that made that change to his series, you said in effect that we shouldn't
> it continues to imply a cache entry is at least one full page, rather
> > > the concerns of other MM developers seriously.
> > new type.
+			deactivate_slab(s, slab, c->freelist, c);
@@ -2767,10 +2770,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
-	if (unlikely(!page))
+	slab = alloc_slab(s, alloc_gfp, node, oo);
To "struct folio" and expose it to all other
+
-	    !check_valid_pointer(s, page, nextfree) && freelist) {
> > > However, this far exceeds the goal of a better mm-fs interface.
> On Mon, Aug 30, 2021 at 01:32:55PM -0400, Johannes Weiner wrote:
> structures that will continue to deal with tail pages down the
There's no "ultimate end-goal".
Conceptually, already no
> > And to make that work I need them to work for file and anon pages
> experience for a newcomer.
> devmem (*)
> The old ->readpages interface gave you pages linked together on ->lru
- * with the count.
> > > + */
>>> lock_hippopotamus(hippopotamus);
>> > *majority* of memory is in larger chunks, while we continue to see 4k
> > > > Well yes, once (and iff) everybody is doing that.
+static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab,
> > mm/memcg: Convert uncharge_page() to uncharge_folio()
> > if necessary, to memory chunks smaller than a default page.
-	void *object = x - (x - page_address(page)) % cache->size;
>>> And to make that work I need them to work for file and anon pages
Because to make
> I'm not particularly happy about this change
> However, this far exceeds the goal of a better mm-fs interface.
I initially found the folio
> think it's pointless to proceed unless one of them weighs in and says
> alignment issue between FS and MM people about the exact nature and
> Right, page tables only need a pfn.
> > layers again.
>>>> badly needed, work that affects everyone in filesystem land
> code.
+		discard_slab(s, slab);
@@ -4003,31 +4006,31 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
-void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
+void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
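The deleted line above ("void *object = x - (x - page_address(page)) % cache->size;") rounds an arbitrary pointer down to the start of its slab object. Unpacked step by step, with intermediate variable names of my choosing, the arithmetic is just "offset within the slab, modulo object size":

/* Sketch of the same computation with the steps spelled out. */
static inline void *sketch_object_start(struct kmem_cache *cache,
					struct page *page, void *x)
{
	void *slab_base = page_address(page);		/* start of the slab's memory */
	unsigned long offset = x - slab_base;		/* byte offset into the slab */
	unsigned long into_object = offset % cache->size; /* distance past object start */

	return x - into_object;
}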
> There are no satisfying answers to any of these questions, but that
> On Fri, Sep 17, 2021 at 11:57:35PM +0300, Kirill A. Shutemov wrote:
> IMHO that's a huge win when it comes to code readability and
> As for long term, everything in the page cache API needs to
> > separately allocated.
> > units of memory in the kernel" very well.
> > > and it also suffers from the compound_head() plague.
> could use alloc_pages_exact() to free the 4kB we're never going to use.
> page_folio(), folio_pfn(), folio_nr_pages all encode a N:1
> > What several people *did* say at this meeting was whether you could
> _hardware_ page size, not struct page pagesize.
> literal "struct page", and that folio_page(), folio_nr_pages() etc be
-		if (df->page == virt_to_head_page(object)) {
+	/* df->slab is always set at this point */
> I don't think it's a good thing to try to do.
+	if (unlikely(!slab))
>> Of course, we could let all special types inherit from "struct folio",
> >> On Wed, Sep 22, 2021 at 11:08:58AM -0400, Johannes Weiner wrote:
-	};
- * associated object cgroups vector.
>> we're going to be subsystem users' faces.
> It seems you're not interested in engaging in this argument.
> > Twitter.
> And IMHO, with something above in mind and not having a clue which
> > folio to shift from being a page array to being a kmalloc'd page list or
> add pages to the page cache yourself.
> lifted to the next level that not only avoid any kind of PageTail checks
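The fragments above mention page_folio(), folio_pfn() and folio_nr_pages() encoding an N:1 relationship. A hedged sketch of how those helpers relate, simplified from the folio series and renamed (not the verbatim kernel definitions): many struct pages map to one folio, and going the other way needs an index.

static inline struct folio *sketch_page_folio(struct page *page)
{
	return (struct folio *)compound_head(page);	/* N pages -> 1 folio */
}

static inline struct page *sketch_folio_page(struct folio *folio, unsigned long n)
{
	return &folio->page + n;			/* 1 folio -> its n-th page */
}

static inline unsigned long sketch_folio_nr_pages(struct folio *folio)
{
	return compound_nr(&folio->page);		/* how many pages it spans */
}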