HIVE-27224: Enhance drop table/partition command #5851
base: master
Conversation
Resolved review thread (outdated): ...ne-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
Resolved review thread: ...ne-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
```java
part_vals = getPartValsFromName(t, dropPartitionReq.getPartName());
}
partNames.add(Warehouse.makePartName(t.getPartitionKeys(), part_vals));
```

Suggested change:

```java
partNames.add(dropPartitionReq.getPartName());
} else {
  partNames.add(Warehouse.makePartName(t.getPartitionKeys(), part_vals));
}
```
Resolved review thread: ...lone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RawStore.java
Resolved review thread: ...lone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RawStore.java
Resolved review thread (outdated): ...ne-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
Resolved review thread (outdated): ...store-server/src/main/java/org/apache/hadoop/hive/metastore/handler/DropDatabaseHandler.java
```java
private A result;
private boolean async;
private Future<A> future;
private ExecutorService executor;
```
Is it better to use a shared thread pool for the operation handler? In the current implementation, the number of threads is not bounded, which could lead to resource exhaustion or even crashes.
The number of handler threads is limited by the maximum number of threads the Thrift server can spawn, which is set by hive.metastore.server.max.threads.
In production, I don't think we would have such a high volume of drop database/table operations happening at nearly the same time; usually the database is the bottleneck before the Metastore hits the limit. If that is the case, we can tune down hive.metastore.server.max.threads.
In async mode, a thread may trigger multiple operation handlers, so hive.metastore.server.max.threads cannot limit the total number of threads here. If we configure a fixed-size pool for the async operations, it can help limit service traffic to some extent.
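For illustration, a minimal sketch of the shared, bounded-pool idea; the class name and the sizing property are hypothetical, not part of this patch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: one fixed-size pool shared by all async operation
// handlers, so concurrent async drops stay bounded regardless of how many
// requests the Thrift worker threads accept.
public final class OperationHandlerPool {
  // Pool-size knob; the property name is made up for illustration.
  private static final int POOL_SIZE =
      Integer.getInteger("metastore.operation.handler.threads", 8);
  private static final ExecutorService SHARED =
      Executors.newFixedThreadPool(POOL_SIZE);

  private OperationHandlerPool() {}

  public static ExecutorService shared() {
    return SHARED;
  }
}
```

Each handler would then submit its work via OperationHandlerPool.shared().submit(...) instead of creating its own executor; extra tasks queue up rather than spawning new threads.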
Usually the new handler runs inside the same thread as its parent; for example, a DropDatabaseHandler involves multiple DropTableHandlers, and these DropTableHandlers run inside the same thread as the DropDatabaseHandler.
Oh, I mean that for an async request, the HMSHandler thread could create an operation handler whose executor starts a new thread. The HMSHandler thread then finishes this request immediately and could handle another request, which may produce yet another new thread.
If such async requests are frequent, this may lead to an explosion in the number of threads.
Though the Metastore handles the async request in the background, the client doesn't; it polls the server for the status until the end:
Lines 1524 to 1529 in 79f63b6
```java
while (!resp.isFinished() && !Thread.currentThread().isInterrupted()) {
  resp = client.drop_database_req(req);
  if (resp.getMessage() != null) {
    LOG.info(resp.getMessage());
  }
}
```
The client will know whether the request succeeded or not as usual at the end, and the Metastore needs a handler thread to answer each poll.

> the HMSHandler thread finishes this request immediately
Line 179 in 79f63b6
```java
result = async ? future.get(timeout, TimeUnit.MILLISECONDS) : future.get();
```
Now it waits up to 5 seconds before answering the API for a long-running drop; if the request is satisfied within this timeout, we can return the result directly.
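A minimal sketch of that wait-then-poll contract (the helper name is hypothetical): the server waits a bounded time on the handler's future and, on timeout, answers the RPC as unfinished so the client polls again:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical helper illustrating the bounded wait described above.
final class BoundedWait {
  static <A> A getWithin(Future<A> future, long timeoutMs)
      throws ExecutionException, InterruptedException {
    try {
      // Finished inside the window: return the real result to the client.
      return future.get(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      // Still running: caller answers the RPC with isFinished=false.
      return null;
    }
  }
}
```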
There are customized clients based on ThriftHiveMetastore.Iface, which may not guarantee such behavior.
asyncDrop defaults to false; if it's explicitly specified as true, then the customized client should take care of this case itself.
Resolved review thread: ...store-server/src/main/java/org/apache/hadoop/hive/metastore/handler/DropDatabaseHandler.java
Resolved review thread (outdated): ...ne-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
A couple of test failures seem to be related to this patch.
```java
wh.addToChangeManagement(funcCmPath);
}
if (req.isDeleteData()) {
  // Moving the data deletion out of the async handler.
```
I think we should move this into the operation handler, because if a Thrift client only calls this API once in async mode, such cleanup code would never run.
In async mode, the client still needs to poll the server for the operation status until the end; the client needs to know whether the request failed or not.
The main reason is that the TUGIBasedProcessor/TUGIAssumingProcessor might close the shared FileSystem behind the scenes, causing "java.io.IOException: Filesystem closed" for the handler running in the background.
We still need to address this "Filesystem closed" issue, as we don't know whether there are FileSystem operations in the Metastore listeners.
FileSystem.closeAllForUGI(clientUgi); in TUGIAssumingProcessor seems like a bug: assuming two requests with the same UGI handle the same path URI concurrently, they may also hit the "Filesystem closed" issue.
This is indeed a tricky problem; I'm not sure whether removing the cache only for inactive UGIs would solve it. And for this thread, there is still an issue if the client crashes between two polls before the operation handler finishes: the cleanup code will not take effect either.
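To make the race concrete, a small sketch (assuming an HDFS default FileSystem; not code from this patch): FileSystem.get() hands out a cached instance keyed partly by UGI, and closeAllForUGI() closes every cached instance for that UGI, including one another request still holds:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class FsCloseRace {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();

    FileSystem fs = FileSystem.get(conf);  // request A holds the cached instance
    FileSystem.closeAllForUGI(ugi);        // request B's processor closes the whole cache for this UGI
    fs.exists(new Path("/tmp"));           // request A: "java.io.IOException: Filesystem closed" on HDFS
  }
}
```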
Nice catch, we should take client crashes into account.
Resolved review thread: ...-server/src/main/java/org/apache/hadoop/hive/metastore/handler/AbstractOperationHandler.java
Force-pushed e603abc to e54f5a9 (compare)
```java
if (ugiTransport.getClientUGI() == null) {
  ugiTransport.setClientUGI(clientUgi);
}
clientUgi = ugiTransport.getClientUGI();
```
Is this line unnecessary? clientUgi is already initialized.
The UGI is identical: https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L483-L491.
We reuse the UGI cached in ugiTransport where possible, so the connection gets the same FileSystem instance from the cache over its whole lifetime.
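For illustration (not code from this patch): per the linked lines, UGI equality is identity-based, so two UGI instances for the same user are distinct FileSystem cache keys, and each would pin its own FileSystem instance:

```java
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiCacheKey {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    UserGroupInformation u1 = UserGroupInformation.createRemoteUser("hive");
    UserGroupInformation u2 = UserGroupInformation.createRemoteUser("hive");
    System.out.println(u1.equals(u2)); // false: equals compares subjects by identity

    FileSystem f1 = u1.doAs((PrivilegedExceptionAction<FileSystem>) () -> FileSystem.get(conf));
    FileSystem f2 = u2.doAs((PrivilegedExceptionAction<FileSystem>) () -> FileSystem.get(conf));
    System.out.println(f1 == f2); // false: each UGI gets its own cached FileSystem
  }
}
```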
Resolved review thread (outdated): ...ne-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
```java
if (request.isNeedResult()) {
AddPartitionsHandler addPartsOp = AbstractOperationHandler.offer(this, request);
if (addPartsOp.success() && request.isNeedResult()) {
AddPartitionsHandler.AddPartitionsResult addPartsResult = addPartsOp.getResult();
```
Can we store the partition list in the AddPartitionsResult and return it directly here?
Not enough: addPartsOp.success() needs to check the state (success or not) of addPartsOp.getResult().
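A hypothetical shape of the result object that would support both points (names illustrative, not the patch's actual classes): the state lives next to the partition list, so success() can be derived from the same result the caller reads:

```java
import java.util.List;
import org.apache.hadoop.hive.metastore.api.Partition;

class AddPartitionsResultSketch {
  private final boolean succeeded;           // state checked by success()
  private final List<Partition> partitions;  // populated only when needResult is set

  AddPartitionsResultSketch(boolean succeeded, List<Partition> partitions) {
    this.succeeded = succeeded;
    this.partitions = partitions;
  }

  boolean isSucceeded() { return succeeded; }
  List<Partition> getPartitions() { return partitions; }
}
```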
```java
if (async) {
  OPID_CLEANER.schedule(() -> OPID_TO_HANDLER.remove(id), 1, TimeUnit.HOURS);
}
afterExecute(resultV);
```
If afterExecute() is needed only when execute() succeeds, we can check the result here:

```java
afterExecute(resultV);
```

Suggested change:

```java
if (resultV != null && resultV.success()) {
  afterExecute(resultV);
}
```
afterExecute is also called in case of failure, to free up resources the handler might hold.
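A minimal sketch of that invariant (names hypothetical, not the patch's actual AbstractOperationHandler): afterExecute sits in a finally block so it runs on success and failure alike and can release whatever the handler holds:

```java
// Hypothetical skeleton illustrating the cleanup-on-every-outcome invariant.
abstract class OperationSketch<R> {
  abstract R execute() throws Exception;

  // result may be null when execute() threw; still release resources here.
  abstract void afterExecute(R result);

  final R run() throws Exception {
    R resultV = null;
    try {
      resultV = execute();
      return resultV;
    } finally {
      afterExecute(resultV); // always invoked, even when execute() failed
    }
  }
}
```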
What changes were proposed in this pull request?
Why are the changes needed?
Does this PR introduce any user-facing change?
How was this patch tested?