closes https://github.com/TryGhost/Ghost/issues/12129
Migrations were failing when upgrading from a version that didn't include the stripe member subscriptions table. The `up` migration was checking for the existence of a unique index that was added to the schema in 3.29, but it was using the wrong variable name, which meant it would never return true for an existing index. For most migrations this worked because the index existed, but when coming through 2.34 the table was created from scratch and did include the index at that point.
- fixed the variable name and re-ordered the variable assignments for better code locality, which would have made the typo more visible
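- A minimal sketch of the corrected check (assuming a `getIndexes(table, connection)` helper; names here are illustrative, not the exact migration code):

```js
async function up(connection) {
    const subscriptionsIndexes = await getIndexes('members_stripe_customers_subscriptions', connection); // assumed helper
    const hasUniqueSubscriptionId = subscriptionsIndexes.includes('members_stripe_customers_subscriptions_subscription_id_unique');

    // the buggy version tested a different variable here, so the check never
    // returned true for an index that already existed
    if (!hasUniqueSubscriptionId) {
        await connection.schema.alterTable('members_stripe_customers_subscriptions', (table) => {
            table.unique('subscription_id');
        });
    }
}
```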
closes #12059
- Published Time and Modified Time were not populating for the 'page' context because 'page' is an extension of 'post' and hence there was no dedicated 'page' context.
- Fixed it by using the common context object & `getContextObject` utility (see the sketch below).
- Should also fix some other missing parameters.
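- A rough sketch of the idea, not the exact Ghost utility: resolve the object to read meta values from via a shared helper so that a 'page' context falls back to the post data:

```js
// hypothetical shape of the helper; the real utility lives with Ghost's other meta helpers
function getContextObject(data, context) {
    // 'page' is an extension of 'post', so both should resolve to the same object
    if (context.includes('post') || context.includes('page') || context.includes('amp')) {
        return data.post || data.page;
    }

    return data[context[0]];
}

// example usage with illustrative data: published/modified time lookups share the same resolution
const contextObject = getContextObject({page: {published_at: '2020-08-01'}}, ['page']);
const publishedDate = contextObject && contextObject.published_at;
```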
closes #12130
When defining a collection with a tag as the data source, the metadata
was not correctly applied due to the context array not including 'tag'.
This update keeps the context management all in the same context helper
file and follows the same pattern as for posts/pages as a data source.
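A hedged sketch of the pattern (helper and property names are illustrative): the context helper emits 'tag' when the collection's data source is a tag, mirroring the handling for posts/pages:

```js
function getContext(data) {
    const context = [];

    if (data && data.tag) {
        // collections driven by a tag need 'tag' in the context so metadata resolves correctly
        context.push('tag');
    }

    if (data && data.post) {
        context.push('post');
    }

    return context;
}

// => ['tag'] for a collection whose data source is a tag
getContext({tag: {slug: 'news'}});
```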
no issue
- Stripe JS is added to a theme via ghost_head if a Stripe account is connected to a members-enabled site
- Previously the script was not loaded async, which blocked the main thread; this changes the script load to async to avoid blocking rendering
- The members script is already loaded with `defer` so it does not block the main thread
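- For illustration only (URLs/paths here are assumptions), the change amounts to the attribute on the injected tags:

```js
// Stripe JS: `async` so the download/execution doesn't block rendering
const stripeScriptTag = '<script async src="https://js.stripe.com/v3/"></script>';

// members script: already `defer`red, so it runs after parsing without blocking
const membersScriptTag = '<script defer src="/public/members.js"></script>';
```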
no issue
- we output the post excerpt in a hidden div in the email template so that email clients pick it up as the "preview" text when listing emails
- when no custom excerpt is provided the preview text is grabbed from `post.excerpt` which is the first 500 chars of the `post.plaintext` value
- `post.plaintext` formats links as "Link [http://url/]" which is unwanted in html email previews
- add a basic replacement to the post email serializer to remove any `[http://url/]` occurrences from the post excerpt before rendering the email content
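- A minimal sketch of the replacement (the regex is illustrative): strip the bracketed URLs left by plaintext link formatting before the excerpt is used as preview text:

```js
function cleanExcerptForPreview(excerpt) {
    // remove " [http://...]" / " [https://...]" left over from plaintext link formatting
    return excerpt.replace(/\s\[http(s?):\/\/[^\]]+\]/g, '');
}

cleanExcerptForPreview('Check out the docs [https://example.com/docs] for details');
// => 'Check out the docs for details'
```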
refs https://github.com/TryGhost/Ghost/issues/11536
- Outlook supports `&#39;` as a special char for apostrophes but not `&apos;`, which is what cheerio/juice render
- adds a basic string replacement to the email serializer to switch to the older style of special char
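- A minimal sketch of the replacement applied to the serialized HTML:

```js
function useOutlookSafeApostrophes(html) {
    // swap the named entity for the numeric reference that older Outlook clients understand
    return html.replace(/&apos;/g, '&#39;');
}

useOutlookSafeApostrophes('It&apos;s a test');
// => 'It&#39;s a test'
```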
closes #12118
- server.start was mistakenly removed in 71f02d25e9
- it is used for loading themes (and other things) and is critical
- added tests to prevent this regressing again in future
no issue
- The new Members API batched import is meant to be a substitute for the current import,
with improved performance while keeping the same behaviour. The current
import processes 1 record at a time using internal API calls and times
out consistently when a large number of members has to be imported (~10k
records without Stripe).
- The new import's aim is to improve performance and process >50K
records without timing out, both with and without Stripe-connected
members
- The batched import can be conceptually divided into 3 stages, each of which has
its own way to improve performance:
1. labels - can stay at the current performance as the number of
labels is usually small, but could also be improved through batching
2. member records + member<->labels relations - these could
be performed as batched inserts into the database (see the sketch after this list)
3. Stripe connections - the most challenging bottleneck to solve because
API requests are slow by their nature and have to deal with the rate limits of
Stripe's API itself
- It's a heavy WIP, with lots of known pitfalls which are marked with
TODOs. These will be solved iteratively over time until the method can be
declared stable
- The new batched import method will be hidden behind the 'enableDeveloperExperiments' flag to
allow early testing
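- As referenced in stage 2 above, a hedged sketch of what batched inserts could look like with knex (table names from the existing schema, chunk size arbitrary):

```js
const CHUNK_SIZE = 1000;

async function insertMembers(knex, memberRows, memberLabelRows) {
    // stage 2: insert member records and member<->label relations in chunks
    await knex.batchInsert('members', memberRows, CHUNK_SIZE);
    await knex.batchInsert('members_labels', memberLabelRows, CHUNK_SIZE);
}
```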
- A simple way to test behaviours without having to set up complex interactions, e.g. to generate errors or slow requests
- Makes it easier to test the new shutdown behaviour, among other things
- there can now be quite a big delay between SIGINT/TERM being received and shutdown finishing
- add an extra log message acknowledging the SIGINT/TERM to facilitate debugging and make it clear the signal was received
- stoppable is a dependency that handles closing connections properly, which server.close does not (a minimal wiring sketch follows these notes)
- active connections are allowed to complete what they are doing
- idle connections are closed
- no new connections are allowed
- we call stoppable in stop() instead of server.close so that idle connections don't hold the server open
- calling await stop() from shutdown then ensures that we have a consistent experience of stop
- all together this allows ghost to shutdown gracefully when there are long-running requests
- @TODO: handle graceful shutdown of long-running processes
- @TODO: consider whether we need to send 503s whilst the server is shutting down
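- The wiring sketch mentioned above (grace period and handler are placeholders, not Ghost's actual values):

```js
const http = require('http');
const stoppable = require('stoppable');

const handler = (req, res) => res.end('ok'); // placeholder request handler
const server = stoppable(http.createServer(handler), 10000); // 10s grace period (assumed)

function stop() {
    // stoppable's stop(): lets active requests finish, closes idle connections,
    // and refuses new ones, so idle keep-alives can't hold the server open
    return new Promise((resolve, reject) => {
        server.stop(err => (err ? reject(err) : resolve()));
    });
}

process.on('SIGINT', async () => {
    console.log('Received SIGINT, beginning shutdown'); // acknowledge the signal early
    await stop();
    process.exit(0);
});

server.listen(2368);
```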
- changed method to logStopMessages, as we use start and stop, not start and shutdown
- changed logStopMessages to output the "proper" messages and use this method consistently - the closing connections message isn't really useful
- changed the uptime message to always be output because I can't see a case where it isn't interesting/useful
- Connection handling is legacy code added in 1438278ce4
- Although we were tracking all connections in memory, we weren't actually closing any because stop isn't called
- This (and restart) were both added as part of the now long-deprecated system for using Ghost directly as an npm module
- If we want to close connections cleanly, we should use a tool to do this
- the announce functions exist for the purpose of communicating with Ghost CLI
- the functions were called announceServerStart and announceServerStopped, which implies they tell Ghost CLI when the server starts and stops
- however, the true intention / purpose of these functions is to:
- either tell Ghost CLI when Ghost has successfully booted (e.g. is ready to serve requests)
- or tell Ghost CLI when the server failed to boot, and report the error so that Ghost CLI can communicate it to the user
- therefore, I've refactored the old functions into 1 function to make it clearer they do the same job, but with 2 different states
- also added some tests :D
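- For illustration, a hedged sketch of the combined function; the name and exact message shape here are assumptions, but the single success/failure entry point is the idea described above:

```js
function announceServerReadiness(error = null) {
    // only relevant when a process manager (e.g. Ghost CLI) is listening for messages
    if (typeof process.send !== 'function') {
        return;
    }

    if (error) {
        // boot failed: report the error so Ghost CLI can surface it to the user
        process.send({started: false, error: {message: error.message}});
    } else {
        // boot succeeded: Ghost is ready to serve requests
        process.send({started: true});
    }
}
```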
refs 022a433e56
- noticed I missed adding debug info in one place (not that it's used, just for consistency)
- also made the SIGINT/TERM code slightly more readable
no-issue
- switch from `membersService.api.members.list` to using bookshelf `Member.findPage()` with the `{paid: true}` filter to avoid per-member queries (N+1) to decorate members with subscriptions and a heavy post-fetch filter via `contentGating`
- add concurrency to the Mailgun API requests in `bulk-email` service to reduce overall time submitting API requests
- add debug statements with timing output for easier measurements
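- A hedged sketch of the concurrency idea (names and the concurrency value are illustrative): submit Mailgun batch requests a few at a time rather than strictly one after another:

```js
const CONCURRENCY = 10;

async function sendAllBatches(batches, sendBatch) {
    const results = [];

    for (let i = 0; i < batches.length; i += CONCURRENCY) {
        const chunk = batches.slice(i, i + CONCURRENCY);
        // requests within a chunk run in parallel; chunks run sequentially
        results.push(...await Promise.all(chunk.map(sendBatch)));
    }

    return results;
}
```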
no issue
For large numbers of members we're unable to perform queries for free/paid members in a reasonable time without indexes. Bulk delete is also very slow when looping through bookshelf models and having bookshelf manage deletion of child records; adding `ON DELETE CASCADE` to the foreign key indexes moves child deletion to the DB level and allows for performant bulk delete queries.
- migrations only run on MySQL because SQLite does not support adding constraints to existing tables via ALTER TABLE
- adds foreign key indexes and constraints with 'ON DELETE CASCADE' for members_labels, members_stripe_customers, and members_stripe_customers_subscriptions
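- A sketch of the shape of one of these migrations using knex's schema builder (MySQL only; one table shown, column names assumed from the schema):

```js
async function up(connection) {
    await connection.schema.alterTable('members_labels', (table) => {
        // move child-row deletion to the DB level so bulk deletes don't need bookshelf
        table.foreign('member_id').references('members.id').onDelete('CASCADE');
        table.foreign('label_id').references('labels.id').onDelete('CASCADE');
    });
}
```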
refs #12100
For performance reasons we want to add foreign key and unique constraints
to the members_stripe_* tables so we can utilise cascading deletes and
joins across the tables when querying.
In order to do this we must first ensure that:
- There are no duplicate entries in the `subscription_id` or `customer_id` columns
- There are no orphaned rows in the subscription or customers tables
If the first is not true, the unique constraint will fail, and if the second is not true,
the foreign key constraint will fail.
As we are only adding the indexes to existing MySQL databases at this point, the
cleanup migrations will also only be run for existing MySQL databases.
The migrations for removing orphaned rows split the deletion into a `SELECT`
followed by a `WHERE IN` to avoid the database "optimising" the query into a
`JOIN` which ends up taking much longer due to the lack of indexes.
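A sketch of the "SELECT then WHERE IN" form using knex (table/column names from the schema above):

```js
async function removeOrphanedCustomers(knex) {
    // fetch the ids first so the delete runs as a plain WHERE IN rather than being
    // rewritten into an unindexed JOIN by the query planner
    const memberIds = (await knex('members').select('id')).map(row => row.id);

    await knex('members_stripe_customers')
        .whereNotIn('member_id', memberIds)
        .del();
}
```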